
Tuesday, May 21, 2024

Digital electronics

From Wikipedia, the free encyclopedia
 
A digital signal has two or more distinguishable waveforms; in this example, high and low voltages, each of which can be mapped onto a digit.

An industrial digital controller

Digital electronics is a field of electronics involving the study of digital signals and the engineering of devices that use or produce them. It stands in contrast to analog electronics, which works primarily with analog signals. Despite the name, digital electronics design includes important analog design considerations.

Digital electronic circuits are usually made from large assemblies of logic gates, often packaged in integrated circuits. Complex devices may have simple electronic representations of Boolean logic functions.

History

The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705), who also established that by using the binary system, the principles of arithmetic and logic could be joined. Digital logic as we know it was the brainchild of George Boole in the mid-19th century. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification of the Fleming valve in 1907 could be used as an AND gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, shared the 1954 Nobel Prize in Physics for creating the first modern electronic AND gate in 1924.

Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II, mechanical analog computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic digital computers were developed, with the term digital being proposed by George Stibitz in 1942. Originally they were the size of a large room, consuming as much power as several hundred modern PCs.

The Z3 was an electromechanical computer designed by Konrad Zuse. Finished in 1941, it was the world's first working programmable, fully automatic digital computer. Its operation was facilitated by the invention of the vacuum tube in 1904 by John Ambrose Fleming.

At the same time that digital calculation replaced analog, purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents. John Bardeen and Walter Brattain invented the point-contact transistor at Bell Labs in 1947, followed by William Shockley inventing the bipolar junction transistor at Bell Labs in 1948.

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of vacuum tubes. Their transistorised computer, the first in the world, was operational by 1953, and a second version was completed there in April 1955. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors were smaller, more reliable, had indefinite lifespans, and required less power, thereby giving off less heat and allowing much denser concentrations of circuits, up to tens of thousands in a relatively compact space.

While working at Texas Instruments in July 1958, Jack Kilby recorded his initial ideas concerning the integrated circuit (IC), then successfully demonstrated the first working integrated circuit on 12 September 1958. Kilby's chip was made of germanium. The following year, Robert Noyce at Fairchild Semiconductor invented the silicon integrated circuit. The basis for Noyce's silicon IC was the planar process, developed in early 1959 by Jean Hoerni, who was in turn building on Mohamed Atalla's silicon surface passivation method developed in 1957. This new technique, the integrated circuit, allowed for quick, low-cost fabrication of complex circuits by having a set of electronic circuits on one small plate ("chip") of semiconductor material, normally silicon.

The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. The MOSFET's advantages include high scalability, affordability, low power consumption, and high transistor density. Its rapid on–off electronic switching speed also makes it ideal for generating pulse trains, the basis for electronic digital signals, in contrast to BJTs, which generate analog signals resembling sine waves more slowly. Along with MOS large-scale integration (LSI), these factors make the MOSFET an important switching device for digital circuits. The MOSFET revolutionized the electronics industry and is the most common semiconductor device.

In the early days of integrated circuits, each chip was limited to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. The wide adoption of the MOSFET by the early 1970s led to the first large-scale integration (LSI) chips with more than 10,000 transistors on a single chip. Following the wide adoption of CMOS, a type of MOSFET logic, by the 1980s, millions and then billions of MOSFETs could be placed on one chip as the technology progressed, and good designs required thorough planning, giving rise to new design methods. The transistor count of devices and total production rose to unprecedented heights. The total number of transistors produced up to 2018 has been estimated at 1.3×10²² (13 sextillion).

The wireless revolution (the introduction and proliferation of wireless networks) began in the 1990s and was enabled by the wide adoption of MOSFET-based RF power amplifiers (power MOSFET and LDMOS) and RF circuits (RF CMOS). Wireless networks allowed for public digital transmission without the need for cables, leading to digital television, satellite and digital radio, GPS, wireless Internet and mobile phones through the 1990s–2000s.

Properties

An advantage of digital circuits when compared to analog circuits is that signals represented digitally can be transmitted without degradation caused by noise. For example, a continuous audio signal transmitted as a sequence of 1s and 0s can be reconstructed without error, provided the noise picked up in transmission is not enough to prevent identification of the 1s and 0s.

In a digital system, a more precise representation of a signal can be obtained by using more binary digits to represent it. While this requires more digital circuits to process the signals, each digit is handled by the same kind of hardware, resulting in an easily scalable system. In an analog system, additional resolution requires fundamental improvements in the linearity and noise characteristics of each step of the signal chain.

With computer-controlled digital systems, new functions can be added through software revision and no hardware changes are needed. Often this can be done outside of the factory by updating the product's software. This way, the product's design errors can be corrected even after the product is in a customer's hands.

Information storage can be easier in digital systems than in analog ones. The noise immunity of digital systems permits data to be stored and retrieved without degradation. In an analog system, noise from aging and wear degrades the information stored. In a digital system, as long as the total noise is below a certain level, the information can be recovered perfectly. Even when more significant noise is present, the use of redundancy permits the recovery of the original data, provided too many errors do not occur.

In some cases, digital circuits use more energy than analog circuits to accomplish the same tasks, thus producing more heat, which adds to the complexity of the circuits, such as the inclusion of heat sinks. In portable or battery-powered systems this can limit the use of digital systems. For example, battery-powered cellular phones often use a low-power analog front-end to amplify and tune the radio signals from the base station. However, a base station has grid power and can use power-hungry, but very flexible, software radios. Such base stations can easily be reprogrammed to process the signals used in new cellular standards.

Many useful digital systems must translate from continuous analog signals to discrete digital signals. This causes quantization errors. Quantization error can be reduced if the system stores enough digital data to represent the signal to the desired degree of fidelity. The Nyquist–Shannon sampling theorem provides an important guideline as to how much digital data is needed to accurately portray a given analog signal.
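
As a rough illustration of quantization error, the Python sketch below quantizes a sampled sine wave with hypothetical 4-bit and 8-bit uniform quantizers and reports the worst-case error; the signal frequency, sample rate, and bit depths are arbitrary choices for illustration, not values from the text.

    import math

    def quantize(x, bits):
        """Map x in [-1.0, 1.0] onto the nearest of 2**bits uniformly spaced levels."""
        levels = 2 ** bits
        step = 2.0 / (levels - 1)
        return round((x + 1.0) / step) * step - 1.0

    # Sample one cycle of a 1 kHz sine at 48 kHz (well above the Nyquist rate of 2 kHz).
    samples = [math.sin(2 * math.pi * 1000 * n / 48000) for n in range(48)]

    for bits in (4, 8):
        worst = max(abs(s - quantize(s, bits)) for s in samples)
        print(f"{bits}-bit quantization: worst-case error ~ {worst:.4f}")
        # Each extra bit roughly halves the step size, and with it the error.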

If a single piece of digital data is lost or misinterpreted, in some systems only a small error may result, while in other systems the meaning of large blocks of related data can completely change. For example, a single-bit error in audio data stored directly as linear pulse-code modulation causes, at worst, a single audible click. But when using audio compression to save storage space and transmission time, a single bit error may cause a much larger disruption.

Because of the cliff effect, it can be difficult for users to tell if a particular system is right on the edge of failure, or if it can tolerate much more noise before failing. Digital fragility can be reduced by designing a digital system for robustness. For example, a parity bit or other error management method can be inserted into the signal path. These schemes help the system detect errors, and then either correct the errors, or request retransmission of the data.
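
As a minimal sketch of the error-management idea, the following Python snippet appends an even parity bit to a data word and checks it after a simulated single-bit transmission error; the word size and framing are illustrative assumptions, not part of any particular standard.

    def add_even_parity(bits):
        """Append a parity bit so the total number of 1s is even."""
        return bits + [sum(bits) % 2]

    def parity_ok(frame):
        """True if the received frame still has an even number of 1s."""
        return sum(frame) % 2 == 0

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    frame = add_even_parity(data)
    print(parity_ok(frame))          # True: no error introduced

    frame[3] ^= 1                    # flip one bit to simulate transmission noise
    print(parity_ok(frame))          # False: the single-bit error is detected

A single parity bit can only detect an odd number of flipped bits; correcting errors or requesting retransmission, as described above, requires stronger schemes such as Hamming codes or CRC-protected frames.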

Construction

A binary clock, hand-wired on breadboards

A digital circuit is typically constructed from small electronic circuits called logic gates that can be used to create combinational logic. Each logic gate is designed to perform a function of Boolean logic when acting on logic signals. A logic gate is generally created from one or more electrically controlled switches, usually transistors, although thermionic valves have seen historic use. The output of a logic gate can, in turn, control or feed into more logic gates.
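
To make the idea concrete, here is a small Python sketch (purely illustrative, not tied to any particular hardware) that models a NAND gate as a Boolean function and composes copies of it into NOT, AND, OR, and XOR gates, mirroring how gate outputs feed further gates.

    def NAND(a, b):
        """Universal gate: output is 0 only when both inputs are 1."""
        return 0 if (a and b) else 1

    def NOT(a):      return NAND(a, a)
    def AND(a, b):   return NOT(NAND(a, b))
    def OR(a, b):    return NAND(NOT(a), NOT(b))
    def XOR(a, b):   return AND(OR(a, b), NAND(a, b))

    # Truth table for the composed XOR gate.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", XOR(a, b))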

Another form of digital circuit is constructed from lookup tables (many sold as "programmable logic devices", though other kinds of PLDs exist). Lookup tables can perform the same functions as machines based on logic gates, but can be easily reprogrammed without changing the wiring. This means that a designer can often repair design errors without changing the arrangement of wires. Therefore, in small-volume products, programmable logic devices are often the preferred solution. They are usually designed by engineers using electronic design automation software.
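
The same XOR function from the previous sketch can be realized as a lookup table, which is essentially how the programmable devices mentioned above work; reprogramming is just rewriting the table. The Python model below is a toy under that assumption, not a description of any specific PLD family.

    # A 2-input lookup table: the key is the input pair, the value is the output.
    xor_lut = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    def lut_gate(lut, a, b):
        """Evaluate a 2-input logic function by table lookup instead of wired gates."""
        return lut[(a, b)]

    print(lut_gate(xor_lut, 1, 0))   # 1

    # "Reprogramming": overwrite the table to turn the same structure into a NOR gate.
    nor_lut = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    print(lut_gate(nor_lut, 1, 0))   # 0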

Integrated circuits consist of multiple transistors on one silicon chip, and are the least expensive way to make large numbers of interconnected logic gates. Integrated circuits are usually interconnected on a printed circuit board, which is a board that holds electrical components and connects them together with copper traces.

Design

Engineers use many methods to minimize logic redundancy in order to reduce the circuit complexity. Reduced complexity reduces component count and potential errors and therefore typically reduces cost. Logic redundancy can be removed by several well-known techniques, such as binary decision diagrams, Boolean algebra, Karnaugh maps, the Quine–McCluskey algorithm, and the heuristic computer method. These operations are typically performed within a computer-aided design system.
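
For a concrete sense of what such minimization does, the sketch below uses the sympy library's SOPform function, which performs Quine–McCluskey-style two-level minimization; the three-variable function and its minterms are arbitrary choices made up for illustration.

    from sympy import symbols
    from sympy.logic import SOPform

    a, b, c = symbols('a b c')

    # Minterms: rows of the truth table where the output is 1, listed as bit vectors (a, b, c).
    minterms = [[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1], [1, 1, 0]]

    # Minimize to a sum-of-products expression; expected result: (a & b) | c
    print(SOPform([a, b, c], minterms))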

Embedded systems with microcontrollers and programmable logic controllers are often used to implement digital logic for complex systems that do not require optimal performance. These systems are usually programmed by software engineers or by electricians, using ladder logic.

Representation

A digital circuit's input-output relationship can be represented as a truth table. An equivalent high-level circuit uses logic gates, each represented by a different shape (standardized by IEEE/ANSI 91–1984). A low-level representation uses an equivalent circuit of electronic switches (usually transistors).

Most digital systems divide into combinational and sequential systems. The output of a combinational system depends only on the present inputs. However, a sequential system has some of its outputs fed back as inputs, so its output may depend on past inputs in addition to present inputs, to produce a sequence of operations. Simplified representations of their behavior called state machines facilitate design and test.

Sequential systems divide into two further subcategories. "Synchronous" sequential systems change state all at once when a clock signal changes state. "Asynchronous" sequential systems propagate changes whenever inputs change. Synchronous sequential systems are made using flip flops that store inputted voltages as a bit only when the clock changes.

Synchronous systems

A 4-bit ring counter using D-type flip flops is an example of synchronous logic. Each device is connected to the clock signal, and they update together.

The usual way to implement a synchronous sequential state machine is to divide it into a piece of combinational logic and a set of flip flops called a state register. The state register represents the state as a binary number. The combinational logic produces the binary representation for the next state. On each clock cycle, the state register captures the feedback generated from the previous state of the combinational logic and feeds it back as an unchanging input to the combinational part of the state machine. The clock rate is limited by the most time-consuming logic calculation in the combinational logic.
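
The structure just described can be sketched behaviorally in a few lines of Python (an illustrative model, not a hardware description language): a combinational next-state function plus a state register that is updated only at the simulated clock edge. The 2-bit counter with an enable input used here is a hypothetical example.

    def next_state(state, enable):
        """Combinational logic: the next value of the 2-bit state register."""
        return (state + 1) % 4 if enable else state

    state = 0                      # state register contents (reset value)
    inputs = [1, 1, 0, 1, 1, 1]    # enable signal sampled at each clock edge

    for cycle, enable in enumerate(inputs):
        state = next_state(state, enable)   # the clock edge: register captures the next state
        print(f"cycle {cycle}: state = {state:02b}")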

Asynchronous systems

Most digital logic is synchronous because it is easier to create and verify a synchronous design. However, asynchronous logic has the advantage of its speed not being constrained by an arbitrary clock; instead, it runs at the maximum speed of its logic gates.  Nevertheless, most systems need to accept external unsynchronized signals into their synchronous logic circuits. This interface is inherently asynchronous and must be analyzed as such. Examples of widely used asynchronous circuits include synchronizer flip-flops, switch debouncers and arbiters.

Asynchronous logic components can be hard to design because all possible states, in all possible timings, must be considered. The usual method is to construct a table of the minimum and maximum time that each such state can exist and then adjust the circuit to minimize the number of such states. The designer must force the circuit to periodically wait for all of its parts to enter a compatible state (this is called "self-resynchronization"). Without careful design, it is easy to accidentally produce asynchronous logic that is unstable; that is, real electronics will have unpredictable results because of the cumulative delays caused by small variations in the values of the electronic components.

Register transfer systems

Example of a simple circuit with a toggling output. The inverter forms the combinational logic in this circuit, and the register holds the state.

Many digital systems are data flow machines. These are usually designed using synchronous register transfer logic and written with hardware description languages such as VHDL or Verilog.

In register transfer logic, binary numbers are stored in groups of flip flops called registers. A sequential state machine controls when each register accepts new data from its input. The outputs of each register are a bundle of wires called a bus that carries that number to other calculations. A calculation is simply a piece of combinational logic. Each calculation also has an output bus, and these may be connected to the inputs of several registers. Sometimes a register will have a multiplexer on its input so that it can store a number from any one of several buses.
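
As a toy illustration of this register-transfer pattern (the registers, buses, and operation below are invented for the example, not any real design), the Python sketch models two registers, a combinational adder whose result appears on its own output bus, and an input multiplexer that selects which bus a register loads from on each clock.

    # State: two registers holding 4-bit numbers.
    regs = {"A": 0b0011, "B": 0b0101}

    def adder_bus(a, b):
        """Combinational calculation; its result is the value on the adder's output bus."""
        return (a + b) & 0b1111   # 4-bit wrap-around

    def clock_edge(select):
        """On each clock, register A loads from the bus chosen by its input multiplexer."""
        buses = {
            "adder": adder_bus(regs["A"], regs["B"]),   # output bus of the calculation
            "b_bus": regs["B"],                         # output bus of register B
        }
        regs["A"] = buses[select]                       # the mux picks which bus A stores

    for sel in ["adder", "adder", "b_bus"]:
        clock_edge(sel)
        print(f"A = {regs['A']:04b}")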

Asynchronous register-transfer systems (such as computers) have a general solution. In the 1980s, some researchers discovered that almost all synchronous register-transfer machines could be converted to asynchronous designs by using first-in-first-out synchronization logic. In this scheme, the digital machine is characterized as a set of data flows. In each step of the flow, a synchronization circuit determines when the outputs of that step are valid and instructs the next stage when to use these outputs.

Computer design

Intel 80486DX2 microprocessor

The most general-purpose register-transfer logic machine is a computer. This is basically an automatic binary abacus. The control unit of a computer is usually designed as a microprogram run by a microsequencer. A microprogram is much like a player-piano roll. Each table entry of the microprogram commands the state of every bit that controls the computer. The sequencer then counts, and the count addresses the memory or combinational logic machine that contains the microprogram. The bits from the microprogram control the arithmetic logic unit, memory and other parts of the computer, including the microsequencer itself. In this way, the complex task of designing the controls of a computer is reduced to the simpler task of programming a collection of much simpler logic machines.
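
The microprogram idea can be sketched in Python as follows; the three-field control word, the accumulator-style datapath, and the operation names are deliberately simplified inventions, since a real microprogram word drives every control bit in the machine.

    # Microprogram: each entry is a control word (ALU operation, operand, next-address override).
    microprogram = [
        {"alu": "load", "operand": 7, "branch": None},
        {"alu": "add",  "operand": 5, "branch": None},
        {"alu": "sub",  "operand": 2, "branch": None},
        {"alu": "halt", "operand": 0, "branch": None},
    ]

    acc = 0          # accumulator controlled by the microprogram
    mpc = 0          # microprogram counter (the microsequencer's count)

    while True:
        word = microprogram[mpc]           # the count addresses the microprogram store
        if word["alu"] == "halt":
            break
        if word["alu"] == "load":
            acc = word["operand"]
        elif word["alu"] == "add":
            acc += word["operand"]
        elif word["alu"] == "sub":
            acc -= word["operand"]
        # The control word may also steer the sequencer itself (a micro-branch).
        mpc = word["branch"] if word["branch"] is not None else mpc + 1

    print(acc)   # 7 + 5 - 2 = 10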

Almost all computers are synchronous. However, asynchronous computers have also been built. One example is the ASPIDA DLX core. Another was offered by ARM Holdings. They do not, however, have any speed advantages because modern computer designs already run at the speed of their slowest component, usually memory. They do use somewhat less power because a clock distribution network is not needed. An unexpected advantage is that asynchronous computers do not produce spectrally-pure radio noise. They are used in some radio-sensitive mobile-phone base-station controllers. They may be more secure in cryptographic applications because their electrical and radio emissions can be more difficult to decode.

Computer architecture

Computer architecture is a specialized engineering activity that tries to arrange the registers, calculation logic, buses and other parts of the computer in the best way possible for a specific purpose. Computer architects have put a lot of work into reducing the cost and increasing the speed of computers in addition to boosting their immunity to programming errors. An increasingly common goal of computer architects is to reduce the power used in battery-powered computer systems, such as smartphones.

Design issues in digital circuits

Digital circuits are made from analog components. The design must assure that the analog nature of the components does not dominate the desired digital behavior. Digital systems must manage noise and timing margins, as well as parasitic inductances and capacitances.

Bad designs have intermittent problems such as glitches (vanishingly fast pulses that may trigger some logic but not others) and runt pulses that do not reach valid threshold voltages.

Additionally, where clocked digital systems interface to analog systems or systems that are driven from a different clock, the digital system can be subject to metastability where a change to the input violates the setup time for a digital input latch.

Since digital circuits are made from analog components, digital circuits calculate more slowly than low-precision analog circuits that use a similar amount of space and power. However, the digital circuit will calculate more repeatably, because of its high noise immunity.

Automated design tools

Much of the effort of designing large logic machines has been automated through the application of electronic design automation (EDA).

Simple truth table-style descriptions of logic are often optimized with EDA tools that automatically produce reduced systems of logic gates or smaller lookup tables that still produce the desired outputs. The most common example of this kind of software is the Espresso heuristic logic minimizer. Optimizing large logic systems may be done using the Quine–McCluskey algorithm or binary decision diagrams. There are promising experiments with genetic algorithms and annealing optimizations.

To automate costly engineering processes, some EDA can take state tables that describe state machines and automatically produce a truth table or a function table for the combinational logic of a state machine. The state table is a piece of text that lists each state, together with the conditions controlling the transitions between them and their associated output signals.

Often, real logic systems are designed as a series of sub-projects, which are combined using a tool flow. The tool flow is usually controlled with the help of a scripting language, a simplified computer language that can invoke the software design tools in the right order. Tool flows for large logic systems such as microprocessors can be thousands of commands long, and combine the work of hundreds of engineers. Writing and debugging tool flows is an established engineering specialty in companies that produce digital designs. The tool flow usually terminates in a detailed computer file or set of files that describe how to physically construct the logic. Often it consists of instructions on how to draw the transistors and wires on an integrated circuit or a printed circuit board.

Parts of tool flows are debugged by verifying the outputs of simulated logic against expected inputs. The test tools take computer files with sets of inputs and outputs and highlight discrepancies between the simulated behavior and the expected behavior. Once the input data is believed to be correct, the design itself must still be verified for correctness. Some tool flows verify designs by first producing a design, then scanning the design to produce compatible input data for the tool flow. If the scanned data matches the input data, then the tool flow has probably not introduced errors.
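
The comparison step can be pictured with the short Python sketch below, in which simulate() and the vector list are stand-ins invented for illustration (real flows read the vectors from files): each recorded input is applied to the simulated logic, and any mismatch against the expected output is reported as a discrepancy.

    def simulate(inputs):
        """Placeholder for the simulated logic under test: here, a 2-bit adder."""
        a, b = inputs
        return (a + b) & 0b11

    # Test vectors: (inputs, expected output) pairs.
    vectors = [((1, 2), 3), ((3, 1), 0), ((2, 2), 1)]   # the last expectation is deliberately wrong

    for inputs, expected in vectors:
        got = simulate(inputs)
        if got != expected:
            print(f"discrepancy for {inputs}: simulated {got}, expected {expected}")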

The functional verification data are usually called test vectors. The functional test vectors may be preserved and used in the factory to test whether newly constructed logic works correctly. However, functional test patterns do not discover all fabrication faults. Production tests are often designed by automatic test pattern generation software tools. These generate test vectors by examining the structure of the logic and systematically generating tests targeting particular potential faults. This way the fault coverage can closely approach 100%, provided the design is properly made testable (see next section).

Once a design exists, and is verified and testable, it often needs to be processed to be manufacturable as well. Modern integrated circuits have features smaller than the wavelength of the light used to expose the photoresist. Software designed for manufacturability adds interference patterns to the exposure masks to eliminate open circuits and enhance the masks' contrast.

Design for testability

There are several reasons for testing a logic circuit. When the circuit is first developed, it is necessary to verify that the designed circuit meets the required functional and timing specifications. When multiple copies of a correctly designed circuit are being manufactured, it is essential to test each copy to ensure that the manufacturing process has not introduced any flaws.

A large logic machine (say, with more than a hundred logical variables) can have an astronomical number of possible states. Obviously, factory testing every state of such a machine is unfeasible, for even if testing each state only took a microsecond, there are more possible states than there are microseconds since the universe began!
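
The arithmetic behind that claim is easy to check. The snippet below assumes a hypothetical machine with 100 binary state variables and the commonly quoted age of the universe of about 13.8 billion years.

    states = 2 ** 100                              # distinct states of 100 binary variables
    age_us = 13.8e9 * 365.25 * 24 * 3600 * 1e6     # microseconds since the universe began

    print(f"{states:.2e} states vs {age_us:.2e} microseconds")
    print(states / age_us)   # roughly 3e6: millions of times more states than elapsed microseconds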

Large logic machines are almost always designed as assemblies of smaller logic machines. To save time, the smaller sub-machines are isolated by permanently installed design for test circuitry, and are tested independently. One common testing scheme provides a test mode that forces some part of the logic machine to enter a test cycle. The test cycle usually exercises large independent parts of the machine.

Boundary scan is a common test scheme that uses serial communication with external test equipment through one or more shift registers known as scan chains. Serial scans have only one or two wires to carry the data, and minimize the physical size and expense of the infrequently used test logic. After all the test data bits are in place, the design is reconfigured to be in normal mode and one or more clock pulses are applied, to test for faults (e.g. stuck-at low or stuck-at high) and capture the test result into flip-flops or latches in the scan shift register(s). Finally, the result of the test is shifted out to the block boundary and compared against the predicted good machine result.
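
The mechanism can be sketched with the toy Python model below (invented for illustration; it is not the JTAG protocol itself): a test pattern is shifted serially into a chain of flip-flops, one functional clock is applied through a stand-in logic block, and the captured response is shifted back out for comparison.

    def shift(chain, bits_in):
        """Serially shift bits into the scan chain, returning the bits that fall out."""
        out = []
        for bit in bits_in:
            out.append(chain[-1])          # the last flip-flop drives the scan-out pin
            chain[:] = [bit] + chain[:-1]  # everything moves one stage along the chain
        return out

    def capture(chain):
        """One functional clock: stand-in combinational logic (invert every bit) is captured."""
        chain[:] = [b ^ 1 for b in chain]

    scan_chain = [0, 0, 0, 0]                  # four flip-flops configured as a shift register
    shift(scan_chain, [1, 0, 1, 1])            # shift the test pattern in
    capture(scan_chain)                        # normal mode for one clock pulse
    result = shift(scan_chain, [0, 0, 0, 0])   # shift the captured response out
    print(result)                              # compared against the predicted good-machine result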

In a board-test environment, serial to parallel testing has been formalized as the JTAG standard.

Trade-offs

Cost

Since a digital system may use many logic gates, the overall cost of building a computer correlates strongly with the cost of a logic gate. In the 1930s, the earliest digital logic systems were constructed from telephone relays because these were inexpensive and relatively reliable.

The earliest integrated circuits were constructed to save weight and permit the Apollo Guidance Computer to control an inertial guidance system for a spacecraft. The first integrated circuit logic gates cost nearly US$50, which in 2023 would be equivalent to $515. Mass-produced gates on integrated circuits became the least-expensive method to construct digital logic.

With the rise of integrated circuits, reducing the absolute number of chips used represented another way to save costs. The goal of a designer is not just to make the simplest circuit, but to keep the component count down. Sometimes this results in more complicated designs with respect to the underlying digital logic but nevertheless reduces the number of components, board size, and even power consumption.

Reliability

Another major motive for reducing component count on printed circuit boards is to reduce the manufacturing defect rate due to failed soldered connections and increase reliability. Defect and failure rates tend to increase along with the total number of component pins.

The failure of a single logic gate may cause a digital machine to fail. Where additional reliability is required, redundant logic can be provided. Redundancy adds cost and power consumption over a non-redundant system.

The reliability of a logic gate can be described by its mean time between failures (MTBF). Digital machines first became useful when the MTBF for a switch rose above a few hundred hours. Even so, many of these machines had complex, well-rehearsed repair procedures, and would be nonfunctional for hours because a tube burned out or a moth got stuck in a relay. Modern transistorized integrated circuit logic gates have MTBFs greater than 82 billion hours (8.2×10¹⁰ h). This level of reliability is required because integrated circuits have so many logic gates.
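
To see why such a large per-gate figure is needed, consider a deliberately naive model in which gate failures are independent and exponentially distributed, so a system of N gates fails roughly N times as often as a single gate; the gate count below is an arbitrary illustration, not a figure from the text.

    gate_mtbf_hours = 8.2e10     # per-gate MTBF quoted above
    gates = 1_000_000            # a modest chip by modern standards

    system_mtbf_hours = gate_mtbf_hours / gates     # series system of independent gates
    print(system_mtbf_hours / (24 * 365), "years")  # roughly 9 years between failures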

Fan-out

Fan-out describes how many logic inputs can be controlled by a single logic output without exceeding the electrical current ratings of the gate outputs. The minimum practical fan-out is about five. Modern electronic logic gates using CMOS transistors for switches have higher fan-outs.

Speed

The switching speed describes how long it takes a logic output to change from true to false or vice versa. Faster logic can accomplish more operations in less time. Modern electronic digital logic routinely switches at gigahertz rates, and some laboratory systems switch at more than a terahertz.

Logic families

Digital design started with relay logic, which is slow. Occasionally a mechanical failure would occur. Fan-outs were typically about 10, limited by the resistance of the coils and arcing on the contacts from high voltages.

Later, vacuum tubes were used. These were very fast, but generated heat, and were unreliable because the filaments would burn out. Fan-outs were typically 5 to 7, limited by the heating from the tubes' current. In the 1950s, special computer tubes were developed with filaments that omitted volatile elements like silicon. These ran for hundreds of thousands of hours.

The first semiconductor logic family was resistor–transistor logic. This was a thousand times more reliable than tubes, ran cooler, and used less power, but had a very low fan-out of 3. Diode–transistor logic improved the fan-out up to about 7, and reduced the power. Some DTL designs used two power-supplies with alternating layers of NPN and PNP transistors to increase the fan-out.

Transistor–transistor logic (TTL) was a great improvement over these. In early devices, fan-out improved to 10, and later variations reliably achieved 20. TTL was also fast, with some variations achieving switching times as low as 20 ns. TTL is still used in some designs.

Emitter coupled logic is very fast but uses a lot of power. It was extensively used for high-performance computers, such as the Illiac IV, made up of many medium-scale components.

By far, the most common digital integrated circuits built today use CMOS logic, which is fast, offers high circuit density and low power per gate. This is used even in large, fast computers, such as the IBM System z.

Recent developments

In 2009, researchers discovered that memristors can implement Boolean state storage and provide a complete logic family with very small amounts of space and power, using familiar CMOS semiconductor processes.

The discovery of superconductivity has enabled the development of rapid single flux quantum (RSFQ) circuit technology, which uses Josephson junctions instead of transistors. Most recently, attempts are being made to construct purely optical computing systems capable of processing digital information using nonlinear optical elements.

Carbon nanotube field-effect transistor

A carbon nanotube field-effect transistor (CNTFET) is a field-effect transistor that utilizes a single carbon nanotube (CNT) or an array of carbon nanotubes as the channel material, instead of bulk silicon, as in the traditional MOSFET structure. There have been major developments since CNTFETs were first demonstrated in 1998.

Background

A diagram showing that a carbon nanotube is essentially rolled up graphene

According to Moore's law, the dimensions of individual devices in an integrated circuit have decreased by a factor of approximately two every two years. This scaling down of devices has been the driving force in technological advances since the late 20th century. However, as noted in the ITRS 2009 edition, further scaling down has faced serious limits related to fabrication technology and device performance as the critical dimension shrank to the sub-22 nm range. The limits involve electron tunneling through short channels and thin insulator films, the associated leakage currents, passive power dissipation, short-channel effects, and variations in device structure and doping. These limits can be overcome to some extent, facilitating further scaling down of device dimensions, by replacing the channel material of the traditional bulk MOSFET structure with a single carbon nanotube or an array of carbon nanotubes.

A carbon nanotube's bandgap is directly affected by its chiral angle and diameter. If those properties can be controlled, CNTs are a promising candidate for future nano-scale transistor devices. Moreover, because of the lack of boundaries in the perfect and hollow cylinder structure of CNTs, there is no boundary scattering. CNTs are also quasi-1D materials in which only forward scattering and back scattering are allowed, and elastic scattering means that free paths in carbon nanotubes are long, typically on the order of micrometers. As a result, quasi-ballistic transport can be observed in nanotubes at relatively long lengths and low fields. Because of the strong covalent carbon–carbon bonding in the sp² configuration, carbon nanotubes are chemically inert and are able to transport large electric currents. In theory, carbon nanotubes are also able to conduct heat nearly as well as diamond or sapphire, and because of their miniaturized dimensions, the CNTFET should switch reliably using much less power than a silicon-based device.

Electronic structure of carbon nanotubes

Graphene atomic structure with a translational vector T and a chiral vector Ĉh of a CNT
One-dimensional energy dispersion relations for (a) (n,m)=(5,5) metallic tube, (b) (n,m)=(10,0) semiconducting tube.

To a first approximation, the exceptional electrical properties of carbon nanotubes can be viewed as inherited from the unique electronic structure of graphene, provided the carbon nanotube is thought of as graphene rolled up along one of its Bravais lattice vectors Ĉh to form a hollow cylinder. In this construction, periodic boundary conditions are imposed over Ĉh to yield a lattice of seamlessly bonded carbon atoms on the cylinder surface.

Thus, the circumference of such a carbon nanotube can be expressed in terms of its rollup vector Ĉh = nâ1 + mâ2, which connects two crystallographically equivalent sites of the two-dimensional graphene sheet. Here n and m are integers and â1 and â2 are the primitive lattice vectors of the hexagonal lattice. Therefore, the structure of any carbon nanotube can be described by an index with a pair of integers (n,m) that define its rollup vector. In terms of the integers (n,m), the nanotube diameter d and the chiral angle θ are given by d = √3·a_cc·√(n² + nm + m²)/π and θ = tan⁻¹[√3·m/(m + 2n)], where a_cc is the C–C bond distance.

Differences in the chiral angle and the diameter cause the differences in the properties of the various carbon nanotubes. For example, it can be shown that an (n,m) carbon nanotube is metallic when n = m, is a small band gap semiconductor when n − m = 3i with i a nonzero integer, and is a moderate band gap semiconductor when n − m ≠ 3i, where i is an integer.

These results can be motivated by noting that periodic boundary conditions for 1D carbon nanotubes permit only a few wave vectors to exist around their circumferences. Metallic conduction could be expected to occur when one of these wave vectors passes through the K-point of graphene's 2D hexagonal Brillouin zone, where the valence and conduction bands are degenerate.

This analysis, however, neglects the effects of curvature caused by rolling up the graphene sheet, which converts all nanotubes with n − m = 3i, i ≠ 0, into small band gap semiconductors, with the exception of the armchair tubes (n = m) that remain metallic. Although the band gaps of carbon nanotubes with n − m = 3i, i ≠ 0, are relatively small, some can still easily exceed room temperature if the nanotube diameter is about a nanometer.

The band gaps of semiconducting carbon nanotubes with n − m ≠ 3i depend predominantly on their diameters. In fact, according to a single-particle tight-binding description of the electronic structure of these nanotubes, the gap is Eg = 2γ·a_cc/d, where γ is the nearest-neighbor hopping matrix element. That this result is an excellent approximation so long as a_cc/d is much less than one has been verified both by all-electron first-principles local density functional calculations and by experiment.

Scatter plots of the band gaps of carbon nanotubes with diameters up to three nanometers calculated using an all valence tight binding model that includes curvature effects appeared early in carbon nanotube research and were reprinted in a review.
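
The relations above are easy to evaluate numerically. The Python sketch below computes the diameter, chiral angle, and approximate tight-binding band gap for a few (n,m) indices; the C–C bond length and the hopping element γ ≈ 2.7 eV are typical literature values assumed here purely for illustration.

    import math

    A_CC = 0.142   # C-C bond length in nm
    GAMMA = 2.7    # nearest-neighbor hopping matrix element in eV (typical assumed value)

    def cnt_properties(n, m):
        d = math.sqrt(3) * A_CC * math.sqrt(n*n + n*m + m*m) / math.pi   # diameter, nm
        theta = math.degrees(math.atan2(math.sqrt(3) * m, m + 2*n))      # chiral angle, degrees
        if n == m:
            kind, gap = "metallic (armchair)", 0.0
        elif (n - m) % 3 == 0:
            kind, gap = "small-gap (curvature-induced)", None   # not given by this simple formula
        else:
            kind, gap = "semiconducting", 2 * GAMMA * A_CC / d  # tight-binding band gap, eV
        return d, theta, kind, gap

    for n, m in [(5, 5), (10, 0), (13, 0)]:
        print((n, m), cnt_properties(n, m))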

Device fabrication

There are many types of CNTFET devices; a general survey of the most common geometries is given below.

Back-gated CNTFETs

Top view
Side view

Top and side view of a silicon back-gated CNTFET. The CNTFET consists of carbon nanotubes deposited on a silicon oxide substrate pre-patterned with chromium/gold source and drain contacts.

The earliest techniques for fabricating carbon nanotube (CNT) field-effect transistors involved pre-patterning parallel strips of metal across a silicon dioxide substrate, and then depositing the CNTs on top in a random pattern. The semiconducting CNTs that happened to fall across two metal strips meet all the requirements necessary for a rudimentary field-effect transistor. One metal strip is the "source" contact while the other is the "drain" contact. The silicon oxide substrate can be used as the gate oxide and adding a metal contact on the back makes the semiconducting CNT gateable.

This technique suffered from several drawbacks, which made for non-optimized transistors. The first was the metal contact, which actually had very little contact to the CNT, since the nanotube just lay on top of it and the contact area was therefore very small. Also, due to the semiconducting nature of the CNT, a Schottky barrier forms at the metal–semiconductor interface, increasing the contact resistance. The second drawback was due to the back-gate device geometry. Its thickness made it difficult to switch the devices on and off using low voltages, and the fabrication process led to poor contact between the gate dielectric and CNT.

Top-gated CNTFETs

The process for fabricating a top-gated CNTFET.

Eventually, researchers migrated from the back-gate approach to a more advanced top-gate fabrication process. In the first step, single-walled carbon nanotubes are solution deposited onto a silicon oxide substrate. Individual nanotubes are then located via atomic force microscope or scanning electron microscope. After an individual tube is isolated, source and drain contacts are defined and patterned using high resolution electron beam lithography. A high temperature anneal step reduces the contact resistance by improving adhesion between the contacts and CNT. A thin top-gate dielectric is then deposited on top of the nanotube, either via evaporation or atomic layer deposition. Finally, the top gate contact is deposited on the gate dielectric, completing the process.

Arrays of top-gated CNTFETs can be fabricated on the same wafer, since the gate contacts are electrically isolated from each other, unlike in the back-gated case. Also, due to the thinness of the gate dielectric, a larger electric field can be generated with respect to the nanotube using a lower gate voltage. These advantages mean top-gated devices are generally preferred over back-gated CNTFETs, despite their more complex fabrication process.

Wrap-around gate CNTFETs

Sheathed CNT
Gate all-around CNT Device

Wrap-around gate CNTFETs, also known as gate-all-around CNTFETs, were developed in 2008 and are a further improvement on the top-gate device geometry. In this device, instead of gating just the part of the CNT that is closer to the metal gate contact, the entire circumference of the nanotube is gated. This should ideally improve the electrical performance of the CNTFET, reducing leakage current and improving the device on/off ratio.

Device fabrication begins by first wrapping CNTs in a gate dielectric and gate contact via atomic layer deposition. These wrapped nanotubes are then solution-deposited on an insulating substrate, where the wrappings are partially etched off, exposing the ends of the nanotube. The source, drain, and gate contacts are then deposited onto the CNT ends and the metallic outer gate wrapping.

Suspended CNTFETs

A suspended CNTFET device.

Yet another CNTFET device geometry involves suspending the nanotube over a trench to reduce contact with the substrate and gate oxide. This technique has the advantage of reduced scattering at the CNT-substrate interface, improving device performance. Methods used to fabricate suspended CNTFETs range from growing the nanotubes over trenches using catalyst particles, to transferring them onto a substrate and then under-etching the dielectric beneath, to transfer-printing them onto a trenched substrate.

The main problem suffered by suspended CNTFETs is that they have very limited material options for use as a gate dielectric (generally air or vacuum), and applying a gate bias has the effect of pulling the nanotube closer to the gate, which places an upper limit on how much the nanotube can be gated. This technique will also only work for shorter nanotubes, as longer tubes will flex in the middle and droop towards the gate, possibly touching the metal contact and shorting the device. In general, suspended CNTFETs are not practical for commercial applications, but they can be useful for studying the intrinsic properties of clean nanotubes.

CNTFET material considerations

There are general decisions one must make when considering what materials to use when fabricating a CNTFET. Semiconducting single-walled carbon nanotubes are preferred over metallic single-walled and metallic multi-walled tubes because they can be fully switched off, at least for low source/drain biases. A lot of work has been put into finding a suitable contact material for semiconducting CNTs; the best material to date is palladium, because its work function closely matches that of nanotubes and it adheres to the CNTs quite well.

Characteristics

Field effect mobility of a back-gated CNTFET device with varying channel lengths. SiO2 is used as the gate dielectric. Tool: 'CNT Mobility' at nanoHUB.org

In CNT–metal contacts, the different work functions of the metal and the CNT result in a Schottky barrier at the source and drain, which are made of metals like silver, titanium, palladium and aluminum. Even though, as in Schottky barrier diodes, the barriers would have made this FET transport only one type of carrier, carrier transport through the metal–CNT interface is dominated by quantum mechanical tunneling through the Schottky barrier. The Schottky barriers in CNTFETs can easily be thinned by the gate field such that tunneling through them results in a substantial current contribution. CNTFETs are ambipolar; either electrons or holes, or both electrons and holes, can be injected simultaneously. This makes the thickness of the Schottky barrier a critical factor.

CNTFETs conduct electrons when a positive bias is applied to the gate and holes when a negative bias is applied, and the drain current increases with increasing magnitude of the applied gate voltage. Around Vg = Vds/2, the current reaches a minimum because the electron and hole contributions to the current are equal.

Like other FETs, the drain current increases with increasing drain bias unless the applied gate voltage is below the threshold voltage. For planar CNTFETs with different design parameters, the FET with a shorter channel length produces a higher saturation current, and the saturation drain current is also higher for a FET with a smaller-diameter nanotube when the length is kept constant. For cylindrical CNTFETs, a higher drain current is driven than in planar CNTFETs, since the CNT is surrounded by an oxide layer which is in turn surrounded by a metal contact serving as the gate terminal.

Theoretical derivation of drain current

Structure of a top-gate CNT transistor

Theoretical investigation of the drain current of the top-gate CNT transistor has been carried out by Kazmierski and colleagues. When an electric field is applied to a CNT transistor, mobile charge is induced in the tube from the source and drain. These charges come from the density of positive-velocity states filled by the source, NS, and the density of negative-velocity states filled by the drain, ND, and both densities are determined by Fermi–Dirac probability distributions; the equilibrium electron density N0 is obtained in the same way at the equilibrium Fermi level. All three densities are expressed as integrals over the density of states at the channel, D(E), which vanishes inside the band gap: the unit-step term in its definition is 1 when the value inside the bracket is positive and 0 when it is negative. The source and drain occupation levels, USF and UDF, are shifted relative to the Fermi level by the self-consistent voltage VSC and, for the drain, additionally by the applied drain–source voltage.

VSC is the self-consistent voltage, which expresses how the CNT energy is affected by the external terminal voltages; it is implicitly related to the device terminal voltages and to the charges at the terminal capacitances by a nonlinear equation in which Qt represents the charge stored in the terminal capacitances and the total terminal capacitance CΣ is the sum of the gate, drain, source, and substrate capacitances shown in the figure above. The standard approach to solving this self-consistent voltage equation is the Newton–Raphson iterative method. According to CNT ballistic transport theory, the drain current caused by the transport of the nonequilibrium charge across the nanotube can then be calculated using Fermi–Dirac statistics; the resulting expression involves F0, the Fermi–Dirac integral of order 0, the Boltzmann constant k, the temperature T, and the reduced Planck constant ℏ. It can be evaluated easily once the self-consistent voltage is known; however, the calculation becomes time-consuming when the self-consistent voltage must be found by the iterative method, and this is the main drawback of the approach.
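
A rough numerical illustration of this procedure is sketched below in Python. It is not Kazmierski's actual model: the charge-balance expression, the device parameters, and the single-subband treatment are simplified stand-ins invented for the example; only the order-0 Fermi–Dirac integral F0(x) = ln(1 + e^x), the Newton–Raphson loop, and the Landauer-style form of the ballistic current correspond to the steps described above.

    import math

    Q = 1.602e-19      # electron charge, C
    KB = 1.381e-23     # Boltzmann constant, J/K
    HBAR = 1.055e-34   # reduced Planck constant, J*s
    KT = KB * 300.0    # thermal energy at room temperature, J

    def F0(x):
        """Fermi-Dirac integral of order 0: F0(x) = ln(1 + exp(x))."""
        return math.log1p(math.exp(x)) if x < 30 else x

    # Illustrative device parameters (invented, not fitted to any real CNTFET).
    EC = 0.3 * Q        # conduction band edge above the Fermi level at zero bias, J
    C_SIGMA = 4e-17     # total terminal capacitance, F
    ALPHA_G = 0.9       # fraction of the gate voltage coupled to the channel
    N_UNIT = 1e3        # scale factor converting F0 terms into an induced electron number

    def induced_charge(vsc, vds):
        """Toy stand-in for the induced channel charge that grows as the barrier is lowered."""
        eta_s = (Q * vsc - EC) / KT            # source-side filling of channel states
        eta_d = (Q * vsc - EC - Q * vds) / KT  # drain-side filling
        return Q * N_UNIT * (F0(eta_s) + F0(eta_d)) / 2

    def solve_vsc(vg, vds, iters=30):
        """Newton-Raphson on a simplified charge-balance equation for the self-consistent voltage."""
        f = lambda v: C_SIGMA * v - ALPHA_G * C_SIGMA * vg + induced_charge(v, vds)
        v, dv = 0.0, 1e-6
        for _ in range(iters):
            slope = (f(v + dv) - f(v - dv)) / (2 * dv)   # numerical derivative
            v -= f(v) / slope
        return v

    def drain_current(vsc, vds):
        """Landauer-type ballistic current of a single four-fold degenerate subband."""
        eta_s = (Q * vsc - EC) / KT
        eta_d = (Q * vsc - EC - Q * vds) / KT
        return (4 * Q * KT / (2 * math.pi * HBAR)) * (F0(eta_s) - F0(eta_d))

    vg, vds = 0.6, 0.5
    vsc = solve_vsc(vg, vds)
    print(f"V_SC = {vsc:.3f} V, I_D = {drain_current(vsc, vds) * 1e6:.2f} uA")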

Heat dissipation

A decrease in current and burning of the CNT can occur due to a temperature rise of several hundred kelvins. Generally, the self-heating effect is much less severe in a semiconducting CNTFET than in a metallic one due to different heat dissipation mechanisms. A small fraction of the heat generated in the CNTFET is dissipated through the channel. The heat is non-uniformly distributed, and the highest values appear at the source and drain sides of the channel. Therefore, the temperature is significantly lowered near the source and drain regions. For a semiconducting CNT, the temperature rise has a relatively small effect on the I–V characteristics compared to silicon.

Comparison to MOSFETs

CNTFETs show different performance characteristics compared to MOSFETs. In a planar gate structure, the p-CNTFET produces ~1500 A/m of on-current per unit width at a gate overdrive of 0.6 V, while the p-MOSFET produces ~500 A/m at the same gate voltage. This on-current advantage comes from the high gate capacitance and improved channel transport. Since the effective gate capacitance per unit width of a CNTFET is about double that of a p-MOSFET, compatibility with high-k gate dielectrics becomes a definite advantage for CNTFETs. The roughly twofold higher carrier velocity of CNTFETs compared with MOSFETs comes from the increased mobility and the band structure. In addition, CNTFETs have about four times higher transconductance.

The first sub-10-nanometer CNT transistor was made which outperformed the best competing silicon devices with more than four times the diameter-normalized current density (2.41 mA/μm) at an operating voltage of 0.5 V. The inverse subthreshold slope of the CNTFET was 94 mV/decade.

Disadvantages

Lifetime (degradation)

Carbon nanotubes have recently been shown to be stable in air for many months and likely more, even when under continual operation. While gate voltages are being applied, the device current can experience some undesirable drift/settling, but changes in gating quickly reset this behavior with little change in threshold voltage.

Reliability

Carbon nanotubes have shown reliability issues when operated under high electric fields or temperature gradients. Avalanche breakdown occurs in semiconducting CNTs and Joule breakdown in metallic CNTs. Unlike avalanche behavior in silicon, avalanche in CNTs is negligibly temperature-dependent. Applying high voltages beyond the avalanche point results in Joule heating and eventual breakdown of the CNTs. This reliability issue has been studied, and it has been observed that a multi-channeled structure can improve the reliability of the CNTFET. Multi-channeled CNTFETs can maintain stable performance after several months, while single-channeled CNTFETs usually wear out after a few weeks in ambient atmosphere. Multi-channeled CNTFETs keep operating when some channels break down, with only a small change in electrical properties.

Difficulties in mass production, production cost

Although CNTs have unique properties such as stiffness, strength, and tenacity compared to other materials, especially to silicon, there is currently no technology for their mass production, which leads to a high production cost. To overcome the fabrication difficulties, several methods have been studied, such as direct growth, solution dropping, and various transfer printing techniques. The most promising methods for mass production involve some degree of self-assembly of pre-produced nanotubes into the desired positions. Individually manipulating many tubes is impractical at a large scale, and growing them in their final positions presents many challenges.

Future work

The most desirable future work on CNTFETs is toward transistors with higher reliability, cheaper production, and enhanced performance, for example by accounting for effects external to the inner CNT transistor, such as the Schottky barrier between the CNT and the metal contacts, multiple CNTs at a single gate, channel fringe capacitances, parasitic source/drain resistance, and series resistance due to scattering effects.

Hermitian adjoint

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Hermitian_adjoint ...