Scanning probe microscopy (SPM) is a branch of microscopy
that forms images of surfaces using a physical probe that scans the
specimen. SPM was founded in 1981, with the invention of the scanning tunneling microscope,
an instrument for imaging surfaces at the atomic level. The first
successful scanning tunneling microscope experiment was done by Gerd Binnig and Heinrich Rohrer. The key to their success was using a feedback loop to regulate gap distance between the sample and the probe.
Many scanning probe microscopes can image several interactions
simultaneously. The manner of using these interactions to obtain an
image is generally called a mode.
The resolution varies somewhat from technique to technique, but
some probe techniques reach an impressive atomic resolution. This is largely because piezoelectric actuators can execute motions with precision and accuracy at the atomic level or better on electronic command. This family of techniques can be called
"piezoelectric techniques". The other common denominator is that the
data are typically obtained as a two-dimensional grid of data points,
visualized in false color as a computer image.
To form images, scanning probe microscopes raster scan
the tip over the surface. At discrete points in the raster scan a value
is recorded (the value recorded depends on the type of SPM and the mode of operation; see below). These recorded values are displayed as a heat map to produce the final SPM image, usually using a black-and-white or an orange color scale.
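As a minimal illustrative sketch of this process, the short Python example below records one value per point of a raster scan and displays the resulting grid as a false-color heat map. The "interaction" function and the scan parameters are invented placeholders, not the behavior of any particular instrument.

```python
import numpy as np
import matplotlib.pyplot as plt

# Record one value per (x, y) point of a raster scan and display the
# resulting 2D grid as a false-color heat map. measure_interaction() is a
# stand-in for whatever quantity the SPM records (tunnel current,
# cantilever deflection, etc.).

def measure_interaction(x, y):
    # Placeholder "surface": a single bump plus measurement noise.
    return np.exp(-((x - 0.3)**2 + (y - 0.6)**2) / 0.01) + 0.02 * np.random.randn()

nx, ny = 128, 128                                   # raster-scan grid size
image = np.zeros((ny, nx))
for j, y in enumerate(np.linspace(0, 1, ny)):       # slow scan axis
    for i, x in enumerate(np.linspace(0, 1, nx)):   # fast scan axis
        image[j, i] = measure_interaction(x, y)

plt.imshow(image, cmap="afmhot", origin="lower")    # orange-like SPM color scale
plt.colorbar(label="recorded value (arb. units)")
plt.show()
```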
Constant interaction mode
In
constant interaction mode (often referred to as "in feedback"), a
feedback loop is used to physically move the probe closer to or further
from the surface under study (in the z axis) to maintain a constant interaction. This interaction depends on the type of SPM: for scanning tunneling microscopy the interaction is the tunnel current, for
contact mode AFM or MFM it is the cantilever deflection, etc. The type of feedback loop used is usually a PI-loop, which is a PID-loop where the differential gain has been set to zero (as it amplifies noise). The z position of the tip (scanning plane is the xy-plane) is recorded periodically and displayed as a heat map. This is normally referred to as a topography image.
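A minimal numerical sketch of such a PI loop is given below. The exponential tunnel-current model, the gains, and all distances are illustrative assumptions chosen only to show the controller holding a current setpoint by moving the tip in z.

```python
import numpy as np

# Sketch of a PI feedback loop (a PID loop with the differential gain set
# to zero) that adjusts the tip height z to hold a tunnel-current setpoint.
# The current model and gain values are illustrative, not instrument data.

def tunnel_current(gap_nm):
    return np.exp(-10.0 * gap_nm)   # nA; decays roughly an order of magnitude per Angstrom

setpoint = 0.5          # nA
kp, ki = 0.02, 0.1      # proportional and integral gains (arbitrary units)

surface_height = 0.1    # local sample height under the tip, nm
z_start = 0.5           # initial tip height, nm
z_tip, integral = z_start, 0.0

for step in range(500):
    error = tunnel_current(z_tip - surface_height) - setpoint  # >0 means tip too close
    integral += error
    z_tip = z_start + kp * error + ki * integral               # controller output drives the z piezo

print(f"regulated gap: {z_tip - surface_height:.3f} nm, "
      f"current: {tunnel_current(z_tip - surface_height):.3f} nA")
```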
In this mode a second image, known as the "error signal" or
"error image" is also taken, which is a heat map of the interaction
which was fed back on. Under perfect operation this image would be blank at the constant value set on the feedback loop. Under real
operation the image shows noise and often some indication of the
surface structure. The user can use this image to adjust the feedback gains to minimise features in the error signal.
If the gains are set incorrectly, many imaging artifacts are
possible. If gains are too low features can appear smeared. If the gains
are too high the feedback can become unstable and oscillate, producing
striped features in the images which are not physical.
Constant height mode
In constant height mode the probe is not moved in the z-axis
during the raster scan. Instead the value of the interaction under
study is recorded (i.e. the tunnel current for STM, or the cantilever
oscillation amplitude for amplitude modulated non-contact AFM). This
recorded information is displayed as a heat map, and is usually referred
to as a constant height image.
Constant height imaging is much more difficult than constant
interaction imaging as the probe is much more likely to crash into the
sample surface.
Usually before performing constant height imaging one must image in
constant interaction mode to check the surface has no large contaminants
in the imaging region, to measure and correct for the sample tilt, and
(especially for slow scans) to measure and correct for thermal drift of
the sample. Piezoelectric creep can also be a problem, so the microscope
often needs time to settle after large movements before constant height
imaging can be performed.
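The tilt correction mentioned above is usually a least-squares plane fit subtracted from a constant-interaction topography image. The sketch below demonstrates the idea on a synthetic tilted image; the "topography" is an invented placeholder used only to exercise the fit.

```python
import numpy as np

# Sample-tilt correction before constant height imaging: fit a plane
# z = a*x + b*y + c to a topography image by least squares and subtract it.

ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]

# Synthetic topography: a tilted plane plus a small bump and noise.
topo = 0.02 * xx + 0.01 * yy + np.exp(-((xx - 32)**2 + (yy - 32)**2) / 20.0)
topo += 0.01 * np.random.randn(ny, nx)

A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(nx * ny)])
coeffs, *_ = np.linalg.lstsq(A, topo.ravel(), rcond=None)
plane = (A @ coeffs).reshape(ny, nx)

flattened = topo - plane          # tilt-corrected image
print("fitted tilt (z per pixel):", coeffs[:2])
print("residual peak height:", float(flattened.max()))
```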
Constant height imaging can be advantageous for eliminating the possibility of feedback artifacts.
Probe tips
The nature of an SPM probe tip depends entirely on the type of SPM being used. The combination of the tip shape and the topography of the sample makes up an SPM image. However, certain characteristics are common to all, or at least most, SPMs.
Most importantly the probe must have a very sharp apex.
The apex of the probe defines the resolution of the microscope, the
sharper the probe the better the resolution. For atomic resolution
imaging the probe must be terminated by a single atom.
For many cantilever-based SPMs (e.g. AFM and MFM), the entire cantilever and integrated probe are fabricated by acid etching, usually from silicon nitride. Conducting probes, needed for STM and SCM among others, are usually constructed from platinum/iridium wire for ambient operations, or tungsten for UHV
operation. Other materials such as gold are sometimes used either for
sample specific reasons or if the SPM is to be combined with other
experiments such as TERS.
Platinum/iridium (and other ambient) probes are normally cut using sharp wire cutters; the optimal method is to cut most of the way through the wire and then pull to snap the last of the wire, increasing the likelihood of a single-atom termination. Tungsten wires are usually electrochemically etched; following this, the oxide layer normally needs to be removed once the tip is in UHV conditions.
It is not uncommon for SPM probes (both purchased and "home-made") to not image with the desired resolution. This could be because the tip is too blunt, or because the probe has more than one peak, resulting in a doubled or ghost image. For some probes, in situ modification of the tip apex is possible; this is usually done either by crashing the tip into the surface or by applying a large electric field. The latter is achieved by applying a bias voltage (of order 10 V) between the tip and the sample; as this distance is usually 1-3 Angstroms, a very large field is generated.
The additional attachment of a quantum dot to the tip apex of a conductive probe enables surface potential imaging with high lateral resolution, a technique known as scanning quantum dot microscopy.
Advantages
The resolution of the microscopes is not limited by diffraction, only by the size of the probe-sample interaction volume (i.e., point spread function), which can be as small as a few picometres.
Hence the ability to measure small local differences in object height
(like that of 135 picometre steps on <100> silicon) is
unparalleled. Laterally the probe-sample interaction extends only across
the tip atom or atoms involved in the interaction.
The interaction can be used to modify the sample to create small structures (Scanning probe lithography).
Unlike electron microscope methods, specimens do not require a
partial vacuum but can be observed in air at standard temperature and
pressure or while submerged in a liquid reaction vessel.
Disadvantages
The
detailed shape of the scanning tip is sometimes difficult to determine.
Its effect on the resulting data is particularly noticeable if the
specimen varies greatly in height over lateral distances of 10 nm or
less.
The scanning techniques are generally slower in acquiring images,
due to the scanning process. As a result, efforts are being made to
greatly improve the scanning rate. Like all scanning techniques, the
embedding of spatial information into a time sequence opens the door to
uncertainties in metrology, say of lateral spacings and angles, which
arise due to time-domain effects like specimen drift, feedback loop
oscillation, and mechanical vibration.
The maximum image size is generally smaller than in other microscopy techniques.
Scanning probe microscopy is often not useful for examining buried solid-solid or liquid-liquid interfaces.
Scanning photo current microscopy (SPCM)
SPCM can be considered a member of the scanning probe microscopy (SPM) family. The difference between SPCM and other SPM techniques is that it exploits a focused laser beam as the local excitation source instead of a probe tip.
Characterization and analysis of the spatially resolved optical behavior of materials is very important in the optoelectronic industry. Simply put, this involves studying how the properties of a material vary across its surface or bulk structure. Techniques that enable spatially resolved optoelectronic measurements provide valuable insights for the enhancement of optical performance. Scanning photocurrent microscopy (SPCM) has emerged as a powerful technique which can investigate spatially resolved optoelectronic properties in semiconductor nanostructures.
Principle
In SPCM, a focused laser beam is used to excite the semiconducting material, producing excitons (electron-hole pairs). These excitons undergo different mechanisms, and if they can reach the nearby electrodes before recombination takes place, a photocurrent is generated. This photocurrent is position dependent, as the laser beam raster scans the device.
SPCM analysis
Using the position dependent photocurrent map, important photocurrent dynamics can be analyzed.
SPCM provides information such as characteristic lengths (e.g. the minority carrier diffusion length), recombination dynamics, doping concentration, internal electric fields, etc.
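One common analysis, shown here only as a hedged sketch on synthetic data, estimates a minority-carrier diffusion length by assuming the photocurrent decays exponentially with distance from the collecting contact and fitting that decay; the model, the 1 um "true" value, and the noise level are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Estimate a diffusion length L from an SPCM line profile, assuming
# I(x) ~ I0 * exp(-x / L) away from the contact. Data are synthetic.

def decay(x, i0, L):
    return i0 * np.exp(-x / L)

x_um = np.linspace(0, 5, 50)            # laser position along the device, um
true_L = 1.0                            # assumed diffusion length, um
current = decay(x_um, 1.0, true_L) + 0.02 * np.random.randn(x_um.size)

popt, pcov = curve_fit(decay, x_um, current, p0=[1.0, 0.5])
print(f"fitted diffusion length: {popt[1]:.2f} um")
```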
Visualization and analysis software
In all instances, and unlike optical microscopes, rendering software is necessary to produce images.
Such software is produced and embedded by instrument manufacturers but
also available as an accessory from specialized work groups or
companies.
The main packages used are freeware: Gwyddion, WSxM (developed by Nanotec) and commercial: SPIP (developed by Image Metrology), FemtoScan Online (developed by Advanced Technologies Center), MountainsMap SPM (developed by Digital Surf), TopoStitch (developed by Image Metrology).
Kelvin probe force microscopy (KPFM), also known as surface potential microscopy, is a noncontact variant of atomic force microscopy (AFM). By raster scanning
in the x,y plane the work function of the sample can be locally mapped
for correlation with sample features. When there is little or no
magnification, this approach can be described as using a scanning Kelvin probe (SKP). These techniques are predominantly used to measure corrosion and coatings.
With KPFM, the work function of surfaces can be observed at atomic or molecular scales. The work function relates to many surface phenomena, including catalytic activity, reconstruction of surfaces, doping and band-bending of semiconductors, charge trapping in dielectrics and corrosion.
The map of the work function produced by KPFM gives information about
the composition and electronic state of the local structures on the
surface of a solid.
History
The SKP technique is based on parallel plate capacitor experiments performed by Lord Kelvin in 1898. In the 1930s William Zisman built upon Lord Kelvin's experiments to develop a technique to measure contact potential differences of dissimilar metals.
Working principle
In SKP the probe and sample are held parallel to each other and
electrically connected to form a parallel plate capacitor. The probe is
selected to be of a different material to the sample, therefore each
component initially has a distinct Fermi level. When electrical connection is made between the probe and the sample, electrons can flow from the higher to the lower Fermi level. This electron flow causes the equilibration of the probe and sample Fermi levels. Furthermore, a surface charge develops on the probe and the sample, with a related potential difference known as the contact potential (Vc). In SKP the probe is vibrated perpendicular to the plane of the sample.
This vibration causes a change in probe to sample distance, which in
turn results in the flow of current, taking the form of an ac sine wave. The resulting ac sine wave is demodulated to a dc signal through the use of a lock-in amplifier.
Typically the user must select the correct reference phase value used
by the lock-in amplifier. Once the dc potential has been determined, an
external potential, known as the backing potential (Vb) can
be applied to null the charge between the probe and the sample. When the
charge is nullified, the Fermi level of the sample returns to its
original position. This means that Vb is equal to -Vc, which is the work function difference between the SKP probe and the sample being measured.
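A minimal numerical sketch of this nulling principle follows, assuming an ideal vibrating parallel-plate capacitor whose induced current is (Vc + Vb) dC/dt; the geometry, vibration frequency, and contact potential are invented values. The sweep simply locates the backing potential at which the ac amplitude vanishes.

```python
import numpy as np

# SKP nulling sketch: for a vibrating parallel-plate capacitor the induced
# current is i(t) = (Vc + Vb) * dC/dt, so the ac amplitude is minimized
# when the backing potential Vb equals -Vc.

eps0 = 8.854e-12           # F/m
area = 1e-6                # probe area, m^2
d0, dd = 50e-6, 5e-6       # mean gap and vibration amplitude, m
omega = 2 * np.pi * 80.0   # vibration frequency, rad/s
Vc = 0.35                  # contact potential to be recovered, V

t = np.linspace(0, 0.5, 20000)
C = eps0 * area / (d0 + dd * np.sin(omega * t))
dCdt = np.gradient(C, t)

def ac_amplitude(Vb):
    return np.sqrt(np.mean(((Vc + Vb) * dCdt) ** 2))   # rms induced current

Vb_sweep = np.linspace(-1.0, 1.0, 401)
amps = np.array([ac_amplitude(Vb) for Vb in Vb_sweep])
print(f"nulling backing potential: {Vb_sweep[amps.argmin()]:.3f} V (expected {-Vc:.3f} V)")
```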
The cantilever in the AFM is a reference electrode
that forms a capacitor with the surface, over which it is scanned
laterally at a constant separation. The cantilever is not
piezoelectrically driven at its mechanical resonance frequency ω0 as in normal AFM, although an alternating current (AC) voltage is applied at this frequency.
When there is a direct-current (DC) potential difference between
the tip and the surface, the AC+DC voltage offset will cause the
cantilever to vibrate. The origin of the force can be understood by considering that the energy of the capacitor formed by the cantilever and the surface is
E = (1/2)C[VDC + VAC sin(ω0t)]² = (1/2)C[VDC² + 2·VDC·VAC sin(ω0t) + (1/2)VAC²(1 − cos(2ω0t))],
which contains terms at the frequencies ω0 and 2ω0 plus terms at DC. Only the cross-term proportional to the VDC·VAC product is at the resonance frequency ω0.
The resulting vibration of the cantilever is detected using usual
scanned-probe microscopy methods (typically involving a diode laser and a
four-quadrant detector). A null circuit is used to drive the DC
potential of the tip to a value which minimizes the vibration. A map of
this nulling DC potential versus the lateral position coordinate
therefore produces an image of the work function of the surface.
A related technique, electrostatic force microscopy
(EFM), directly measures the force produced on a charged tip by the
electric field emanating from the surface. EFM operates much like magnetic force microscopy
in that the frequency shift or amplitude change of the cantilever
oscillation is used to detect the electric field. However, EFM is much
more sensitive to topographic artifacts than KPFM. Both EFM and KPFM
require the use of conductive cantilevers, typically metal-coated silicon or silicon nitride. Another AFM-based technique for the imaging of electrostatic surface potentials, scanning quantum dot microscopy, quantifies surface potentials based on their ability to gate a tip-attached quantum dot.
Factors affecting SKP measurements
The
quality of an SKP measurement is affected by a number of factors. This
includes the diameter of the SKP probe, the probe to sample distance,
and the material of the SKP probe. The probe diameter is important in
the SKP measurement because it affects the overall resolution of the
measurement, with smaller probes leading to improved resolution.
On the other hand, reducing the size of the probe causes an increase in fringing effects, which reduces the sensitivity of the measurement by increasing the contribution of stray capacitances. The material used in the construction of the SKP probe is important to the quality of the SKP measurement.
This occurs for a number of reasons. Different materials have different
work function values which will affect the contact potential measured.
Different materials have different sensitivity to humidity changes. The material can also affect the resulting lateral resolution of the SKP measurement. In commercial probes tungsten is used, though probes of platinum, copper, gold, and NiCr have been used.
The probe to sample distance affects the final SKP measurement, with
smaller probe to sample distances improving the lateral resolution and the signal-to-noise ratio of the measurement.
Furthermore, reducing the SKP probe to sample distance increases the
intensity of the measurement, where the intensity of the measurement is
proportional to 1/d², where d is the probe to sample distance. The effects of changing probe to sample distance on the measurement can be counteracted by using SKP in constant distance mode.
Work function
The
Kelvin probe force microscope or Kelvin force microscope (KFM) is based
on an AFM set-up and the determination of the work function is based on
the measurement of the electrostatic forces between the small AFM tip
and the sample. The conducting tip and the sample are characterized by
(in general) different work functions, which represent the difference
between the Fermi level and the vacuum level
for each material. If both elements were brought in contact, a net
electric current would flow between them until the Fermi levels were
aligned. The difference between the work functions is called the contact potential difference and is denoted generally with VCPD.
An electrostatic force exists between tip and sample, because of the
electric field between them. For the measurement a voltage is applied
between tip and sample, consisting of a DC-bias VDC and an AC-voltage VAC sin(ωt) of frequency ω.
Tuning the AC-frequency to the resonant frequency
of the AFM cantilever results in an improved sensitivity. The
electrostatic force in a capacitor may be found by differentiating the
energy function with respect to the separation of the elements and can
be written as
F = −(1/2)(dC/dz)V²,
where C is the capacitance, z is the separation, and V
is the voltage, each between tip and surface. Substituting the previous
formula for voltage (V) shows that the electrostatic force can be split
up into three contributions, as the total electrostatic force F acting on the tip then has spectral components at the frequencies ω and 2ω.
The DC component, FDC, contributes to the topographical signal; the term Fω at the characteristic frequency ω is used to measure the contact potential; and the contribution F2ω can be used for capacitance microscopy.
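For reference, a worked expansion is given below, assuming the effective voltage across the capacitor is V = (VDC − VCPD) + VAC sin(ωt), consistent with the dependence on VDC − VCPD described in the next section; sign conventions vary between treatments, and only the structure of the three terms matters here.

```latex
% Expansion of F = -\tfrac{1}{2}\,\frac{dC}{dz}\,V^2 with
% V = (V_{DC} - V_{CPD}) + V_{AC}\sin(\omega t):
\begin{align}
  F_{DC}      &= -\frac{dC}{dz}\left[\tfrac{1}{2}(V_{DC}-V_{CPD})^2 + \tfrac{1}{4}V_{AC}^2\right] \\
  F_{\omega}  &= -\frac{dC}{dz}\,(V_{DC}-V_{CPD})\,V_{AC}\sin(\omega t) \\
  F_{2\omega} &= +\frac{dC}{dz}\,\tfrac{1}{4}V_{AC}^2\cos(2\omega t)
\end{align}
```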
Contact potential measurements
For contact potential measurements a lock-in amplifier is used to detect the cantilever oscillation at ω. During the scan VDC
will be adjusted so that the electrostatic forces between the tip and
the sample become zero and thus the response at the frequency ω becomes
zero. Since the electrostatic force at ω depends on VDC − VCPD, the value of VDC that minimizes the ω-term
corresponds to the contact potential. Absolute values of the sample
work function can be obtained if the tip is first calibrated against a
reference sample of known work function. Apart from this, one can use the normal topographic scan methods at the resonance frequency ω
independently of the above. Thus, in one scan, the topography and the
contact potential of the sample are determined simultaneously.
This can be done in (at least) two different ways: 1) The topography is
captured in AC mode which means that the cantilever is driven by a piezo
at its resonant frequency. Simultaneously the AC voltage for the KPFM
measurement is applied at a frequency slightly lower than the resonant
frequency of the cantilever. In this measurement mode the topography and
the contact potential difference are captured at the same time and this
mode is often called single-pass. 2) One line of the topography is
captured either in contact or AC mode and is stored internally. Then,
this line is scanned again, while the cantilever remains at a defined distance from the sample without a mechanically driven oscillation, but the
AC voltage of the KPFM measurement is applied and the contact potential
is captured as explained above. It is important to note that the
cantilever tip must not be too close to the sample in order to allow
good oscillation with applied AC voltage. Therefore, KPFM can be
performed simultaneously during AC topography measurements but not
during contact topography measurements.
Applications
The Volta potential measured by SKP is directly proportional to the corrosion potential of a material; as such, SKP has found widespread use in the study of corrosion and coatings. In the field of coatings, for example, a
scratched region of a self-healing shape memory polymer coating containing a heat generating agent on aluminium alloys was measured by SKP.
Initially after the scratch was made, the Volta potential was noticeably higher and wider over the scratch than over the rest of the sample, implying this region was more likely to corrode. The Volta potential decreased over subsequent measurements, and eventually the peak over the scratch completely disappeared, implying the coating had healed. Because
SKP can be used to investigate coatings in a non-destructive way it has
also been used to determine coating failure. In a study of polyurethane coatings, it was seen that the work function increases with increasing exposure to high temperature and humidity. This increase in work function is related to decomposition of the coating likely from hydrolysis of bonds within the coating.
Using SKP the corrosion of industrially important alloys has been measured. In particular with SKP it is possible to investigate the effects of environmental stimulus on corrosion. For example, the microbially induced corrosion of stainless steel and titanium has been examined.
SKP is useful to study this sort of corrosion because it usually occurs
locally, therefore global techniques are poorly suited. Surface
potential changes related to increased localized corrosion were shown by
SKP measurements. Furthermore, it was possible to compare the resulting
corrosion from different microbial species. In another example SKP was
used to investigate biomedical alloy materials, which can be corroded within the human body. In studies on Ti-15Mo under inflammatory conditions, SKP measurements showed a lower corrosion resistance at the bottom of a corrosion pit than at the oxide
protected surface of the alloy. SKP has also been used to investigate
the effects of atmospheric corrosion, for example to investigate copper
alloys in marine environment.
In this study Kelvin potentials became more positive, indicating a more
positive corrosion potential, with increased exposure time, due to an
increase in thickness of corrosion products. As a final example, SKP was used to investigate stainless steel under simulated gas pipeline conditions. These measurements showed an increasing difference in the corrosion potential of cathodic and anodic
regions with increased corrosion time, indicating a higher likelihood
of corrosion. Furthermore, these SKP measurements provided information
about local corrosion, not possible with other techniques.
SKP has been used to investigate the surface potential of materials used in solar cells, with the advantage that it is a non-contact, and therefore a non-destructive technique. It can be used to determine the electron affinity of different materials in turn allowing the energy level overlap of conduction bands
of differing materials to be determined. The energy level overlap of
these bands is related to the surface photovoltage response of a system.
As a non-contact, non-destructive technique SKP has been used to investigate latent fingerprints on materials of interest for forensic studies.
When fingerprints are left on a metallic surface they leave behind
salts which can cause the localized corrosion of the material of
interest. This leads to a change in Volta potential of the sample, which
is detectable by SKP. SKP is particularly useful for these analyses
because it can detect this change in Volta potential even after heating,
or coating by, for example, oils.
SKP has been used to analyze the corrosion mechanisms of schreibersite-containing meteorites. The aim of these studies has been to investigate the role of such meteorites in releasing species utilized in prebiotic chemistry.
Digital electronics is a field of electronics involving the study of digital signals and the engineering of devices that use or produce them. This is in contrast to analog electronics, which works primarily with analog signals. Despite the name, digital electronics designs include important analog design considerations.
Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II,
mechanical analog computers were used for specialized military
applications such as calculating torpedo aiming. During this time the
first electronic digital computers were developed, with the term digital being proposed by George Stibitz in 1942. Originally they were the size of a large room, consuming as much power as several hundred modern PCs.
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of vacuum tubes. Their "transistorised computer", the first in the world, was operational by 1953,
and a second version was completed there in April 1955. From 1955 and
onwards, transistors replaced vacuum tubes in computer designs, giving
rise to the "second generation" of computers. Compared to vacuum tubes,
transistors were smaller, more reliable, had indefinite lifespans, and required less power, thereby giving off less heat and allowing much denser concentrations of circuits, up to tens of thousands in a relatively compact space.
In the early days of integrated circuits,
each chip was limited to only a few transistors, and the low degree of
integration meant the design process was relatively simple.
Manufacturing yields were also quite low by today's standards. The wide
adoption of the MOSFET transistor by the early 1970s led to the first large-scale integration (LSI) chips with more than 10,000 transistors on a single chip. Following the wide adoption of CMOS,
a type of MOSFET logic, by the 1980s, millions and then billions of
MOSFETs could be placed on one chip as the technology progressed, and good designs required thorough planning, giving rise to new design methods. The transistor count
of devices and total production rose to unprecedented heights. The
total amount of transistors produced until 2018 has been estimated to be
1.3×10²² (13 sextillion).
An
advantage of digital circuits when compared to analog circuits is that
signals represented digitally can be transmitted without degradation
caused by noise.
For example, a continuous audio signal transmitted as a sequence of 1s and 0s can be reconstructed without error, provided the noise picked up in transmission is not enough to prevent identification of the 1s and 0s.
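The sketch below illustrates this regeneration property on synthetic data: a bit stream sent as 0 V / 1 V levels survives additive noise exactly, as long as the noise stays well below the halfway decision threshold. The noise level is an assumption chosen to make the point.

```python
import numpy as np

# A bit stream corrupted by additive noise is recovered exactly by
# thresholding, as long as the noise does not cross the decision level.

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 10_000)                   # original 1s and 0s
received = bits + rng.normal(0, 0.1, bits.size)     # noise well below half the swing

recovered = (received > 0.5).astype(int)            # regenerate by thresholding
print("bit errors:", int(np.sum(recovered != bits)))  # typically 0 at this noise level
```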
In a digital system, a more precise representation of a signal
can be obtained by using more binary digits to represent it. While this
requires more digital circuits to process the signals, each digit is
handled by the same kind of hardware, resulting in an easily scalable
system. In an analog system, additional resolution requires fundamental
improvements in the linearity and noise characteristics of each step of
the signal chain.
With computer-controlled digital systems, new functions can be
added through software revision and no hardware changes are needed.
Often this can be done outside of the factory by updating the product's
software. This way, the product's design errors can be corrected even
after the product is in a customer's hands.
Information storage can be easier in digital systems than in
analog ones. The noise immunity of digital systems permits data to be
stored and retrieved without degradation. In an analog system, noise
from aging and wear degrades the information stored. In a digital system,
as long as the total noise is below a certain level, the information
can be recovered perfectly. Even when more significant noise is present,
the use of redundancy permits the recovery of the original data provided too many errors do not occur.
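The simplest form of such redundancy is a repetition code: the sketch below stores each bit three times and recovers it by majority vote, so the data survive as long as no more than one of the three copies of any bit is corrupted. The 1% corruption rate is an illustrative assumption.

```python
import numpy as np

# Triple-repetition redundancy: store each bit three times, recover by
# majority vote. Data survive unless two or more copies of a bit flip.

rng = np.random.default_rng(1)
data = rng.integers(0, 2, 1000)

stored = np.repeat(data, 3)                     # triple-redundant copy
flips = rng.random(stored.size) < 0.01          # 1% of stored bits corrupted
corrupted = stored ^ flips.astype(int)

votes = corrupted.reshape(-1, 3).sum(axis=1)    # majority vote per original bit
recovered = (votes >= 2).astype(int)
print("unrecoverable bit errors:", int(np.sum(recovered != data)))
```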
In some cases, digital circuits use more energy than analog
circuits to accomplish the same tasks, thus producing more heat which
increases the complexity of the circuits such as the inclusion of heat
sinks. In portable or battery-powered systems this can limit the use of
digital systems. For example, battery-powered cellular phones often use a low-power analog front-end to amplify and tune the radio signals from the base station. However, a base station has grid power and can use power-hungry, but very flexible software radios. Such base stations can easily be reprogrammed to process the signals used in new cellular standards.
Many useful digital systems must translate from continuous analog signals to discrete digital signals. This causes quantization errors. Quantization error can be reduced if the system stores enough digital data to represent the signal to the desired degree of fidelity. The Nyquist–Shannon sampling theorem provides an important guideline as to how much digital data is needed to accurately portray a given analog signal.
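As a small worked illustration of that guideline, the sketch below computes the apparent frequency of a sampled tone: sampling a 3 kHz tone above twice its frequency preserves it, while undersampling aliases it to a lower frequency. The specific frequencies are illustrative assumptions.

```python
# Nyquist criterion sketch: a tone must be sampled at more than twice its
# frequency, otherwise it aliases to a lower apparent frequency.

f_signal = 3000.0   # Hz

def apparent_frequency(f, fs):
    """Frequency a sampled sine appears at after sampling at rate fs."""
    f_alias = f % fs
    return min(f_alias, fs - f_alias)

for fs in (8000.0, 4000.0):
    print(f"sample rate {fs:5.0f} Hz -> apparent frequency "
          f"{apparent_frequency(f_signal, fs):4.0f} Hz")
```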
If a single piece of digital data is lost or misinterpreted, in
some systems only a small error may result, while in other systems the
meaning of large blocks of related data can completely change. For
example, a single-bit error in audio data stored directly as linear pulse-code modulation causes, at worst, a single audible click. But when using audio compression to save storage space and transmission time, a single bit error may cause a much larger disruption.
Because of the cliff effect,
it can be difficult for users to tell if a particular system is right
on the edge of failure, or if it can tolerate much more noise before
failing. Digital fragility can be reduced by designing a digital system
for robustness. For example, a parity bit
or other error management method can be inserted into the signal path.
These schemes help the system detect errors, and then either correct the errors, or request retransmission of the data.
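A minimal example of the parity-bit idea is sketched below: one extra bit makes every word contain an even number of 1s, so any single flipped bit causes the check to fail and the receiver can request retransmission. The data word is arbitrary.

```python
# Even-parity sketch: append one bit so each word has an even number of 1s;
# a single-bit error then makes the parity check fail.

def add_even_parity(bits):
    return bits + [sum(bits) % 2]

def parity_ok(word):
    return sum(word) % 2 == 0

data = [1, 0, 1, 1, 0, 0, 1]
word = add_even_parity(data)
print("transmitted:", word, "check:", parity_ok(word))   # check: True

word[3] ^= 1                                             # single-bit error in transit
print("corrupted:  ", word, "check:", parity_ok(word))   # check: False -> error detected
```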
A digital circuit is typically constructed from small electronic circuits called logic gates that can be used to create combinational logic. Each logic gate is designed to perform a function of Boolean logic when acting on logic signals. A logic gate is generally created from one or more electrically controlled switches, usually transistors but thermionic valves have seen historic use. The output of a logic gate can, in turn, control or feed into more logic gates.
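The sketch below models gates as simple Boolean functions and composes them, building XOR entirely from NAND gates (a classic construction) to show how the output of one gate feeds the inputs of others. It is an abstract model, not a description of any particular circuit family.

```python
# Logic gates modeled as Boolean functions; XOR composed from NAND gates.

def NAND(a, b):
    return int(not (a and b))

def XOR(a, b):
    n1 = NAND(a, b)
    n2 = NAND(a, n1)
    n3 = NAND(b, n1)
    return NAND(n2, n3)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {XOR(a, b)}")
```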
Another form of digital circuit is constructed from lookup tables (many sold as "programmable logic devices",
though other kinds of PLDs exist). Lookup tables can perform the same
functions as machines based on logic gates, but can be easily
reprogrammed without changing the wiring. This means that a designer can
often repair design errors without changing the arrangement of wires.
Therefore, in small volume products, programmable logic devices are
often the preferred solution. They are usually designed by engineers
using electronic design automation software.
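The lookup-table idea can be sketched as follows: the same two-input function is realized by a four-entry table indexed by the input bits, and "reprogramming" simply means loading a different table, with no change to the wiring. The helper function and tables are invented for illustration.

```python
# Lookup-table sketch of a programmable logic device: a 2-input function
# realized as a 4-entry table; reprogramming = loading a different table.

def make_lut(truth_table):
    """truth_table[a][b] gives the output for inputs (a, b)."""
    def logic(a, b):
        return truth_table[a][b]
    return logic

and_lut = make_lut([[0, 0], [0, 1]])
xor_lut = make_lut([[0, 1], [1, 0]])   # "reprogrammed" without changing any wiring

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND-LUT={and_lut(a, b)}  XOR-LUT={xor_lut(a, b)}")
```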
Integrated circuits
consist of multiple transistors on one silicon chip, and are the least
expensive way to make large numbers of interconnected logic gates.
Integrated circuits are usually interconnected on a printed circuit board, which is a board that holds electrical components and connects them together with copper traces.
A digital circuit's input-output relationship can be represented as a truth table. An equivalent high-level circuit uses logic gates, each represented by a different shape (standardized by IEEE/ANSI 91–1984). A low-level representation uses an equivalent circuit of electronic switches (usually transistors).
Most digital systems divide into combinational and sequential systems.
The output of a combinational system depends only on the present
inputs. However, a sequential system has some of its outputs fed back as
inputs, so its output may depend on past inputs in addition to present
inputs, to produce a sequence of operations. Simplified representations of their behavior called state machines facilitate design and test.
The usual way to implement a synchronous sequential state machine is
to divide it into a piece of combinational logic and a set of flip flops
called a state register. The state register represents the state
as a binary number. The combinational logic produces the binary
representation for the next state. On each clock cycle, the state
register captures the feedback generated from the previous state of the
combinational logic and feeds it back as an unchanging input to the
combinational part of the state machine. The clock rate is limited by
the most time-consuming logic calculation in the combinational logic.
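The sketch below models this arrangement in miniature: a pure combinational next-state function plus a state register that captures the new state on each clock cycle. A 2-bit counter stands in as the state machine purely for illustration.

```python
# Synchronous state machine sketch: combinational next-state logic plus a
# state register updated once per clock cycle. A 2-bit counter is used as
# the example machine.

def next_state_logic(state, enable):
    """Pure combinational function: next state from present state and input."""
    return (state + 1) % 4 if enable else state

state_register = 0                        # reset state
for clock_cycle in range(6):
    print(f"cycle {clock_cycle}: state = {state_register:02b}")
    state_register = next_state_logic(state_register, enable=1)   # clock edge
```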
Asynchronous systems
Most
digital logic is synchronous because it is easier to create and verify a
synchronous design. However, asynchronous logic has the advantage of
its speed not being constrained by an arbitrary clock; instead, it runs
at the maximum speed of its logic gates. Nevertheless, most systems need to accept external unsynchronized
signals into their synchronous logic circuits. This interface is
inherently asynchronous and must be analyzed as such. Examples of widely
used asynchronous circuits include synchronizer flip-flops, switch debouncers and arbiters.
Asynchronous logic components can be hard to design because all possible states, in all possible timings, must be considered. The usual
method is to construct a table of the minimum and maximum time that each
such state can exist and then adjust the circuit to minimize the number
of such states. The designer must force the circuit to periodically
wait for all of its parts to enter a compatible state (this is called
"self-resynchronization"). Without careful design, it is easy to
accidentally produce asynchronous logic that is unstable—that is—real
electronics will have unpredictable results because of the cumulative
delays caused by small variations in the values of the electronic
components.
In register transfer logic, binary numbers are stored in groups of flip flops called registers.
A sequential state machine controls when each register accepts new data
from its input. The outputs of each register are a bundle of wires
called a bus
that carries that number to other calculations. A calculation is simply
a piece of combinational logic. Each calculation also has an output
bus, and these may be connected to the inputs of several registers.
Sometimes a register will have a multiplexer on its input so that it can store a number from any one of several buses.
Asynchronous register-transfer systems (such as computers) have a
general solution. In the 1980s, some researchers discovered that almost
all synchronous register-transfer machines could be converted to
asynchronous designs by using first-in-first-out synchronization logic.
In this scheme, the digital machine is characterized as a set of data
flows. In each step of the flow, a synchronization circuit determines
when the outputs of that step are valid and instructs the next stage
when to use these outputs.
Computer design
The most general-purpose register-transfer logic machine is a computer. This is basically an automatic binary abacus. The control unit of a computer is usually designed as a microprogram run by a microsequencer.
A microprogram is much like a player-piano roll. Each table entry of
the microprogram commands the state of every bit that controls the
computer. The sequencer then counts, and the count addresses the memory
or combinational logic machine that contains the microprogram. The bits
from the microprogram control the arithmetic logic unit, memory
and other parts of the computer, including the microsequencer itself.
In this way, the complex task of designing the controls of a computer is
reduced to the simpler task of programming a collection of much simpler
logic machines.
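A toy sketch of this arrangement is given below: the microprogram is a table of control words, a sequencer counter addresses the table, and the bits of each word drive parts of a minimal datapath. The control fields and the little loop are invented purely for illustration and do not correspond to any real machine.

```python
# Toy microprogrammed control unit: a sequencer counter addresses a table
# of control words whose bits drive a tiny datapath.

# Each control word: (load_accumulator, add_operand, jump_target or None)
microprogram = [
    (1, 0, None),   # 0: load the accumulator
    (0, 1, None),   # 1: add the operand
    (0, 1, None),   # 2: add the operand again
    (0, 0, 1),      # 3: jump back to address 1 (loop)
]

accumulator, operand = 0, 5
sequencer = 0                                # microprogram counter
for _ in range(8):                           # run a few microcycles
    load, add, jump = microprogram[sequencer]
    if load:
        accumulator = operand
    if add:
        accumulator += operand
    sequencer = jump if jump is not None else sequencer + 1
    print(f"uPC={sequencer}  accumulator={accumulator}")
```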
Almost all computers are synchronous. However, asynchronous computers have also been built. One example is the ASPIDA DLX core. Another was offered by ARM Holdings.
They do not, however, have any speed advantages because modern computer
designs already run at the speed of their slowest component, usually
memory. They do use somewhat less power because a clock distribution
network is not needed. An unexpected advantage is that asynchronous
computers do not produce spectrally-pure radio noise. They are used in
some radio-sensitive mobile-phone base-station controllers. They may be
more secure in cryptographic applications because their electrical and
radio emissions can be more difficult to decode.
Computer architecture
Computer architecture
is a specialized engineering activity that tries to arrange the
registers, calculation logic, buses and other parts of the computer in
the best way possible for a specific purpose. Computer architects have
put a lot of work into reducing the cost and increasing the speed of
computers in addition to boosting their immunity to programming errors.
An increasingly common goal of computer architects is to reduce the
power used in battery-powered computer systems, such as smartphones.
Design issues in digital circuits
Digital circuits are made from analog components. The design must
assure that the analog nature of the components does not dominate the
desired digital behavior. Digital systems must manage noise and timing
margins, parasitic inductances and capacitances.
Bad designs have intermittent problems such as glitches (vanishingly fast pulses that may trigger some logic but not others) and runt pulses that do not reach valid threshold voltages.
Additionally, where clocked digital systems interface to analog
systems or systems that are driven from a different clock, the digital
system can be subject to metastability where a change to the input violates the setup time for a digital input latch.
Since digital circuits are made from analog components, digital
circuits calculate more slowly than low-precision analog circuits that
use a similar amount of space and power. However, the digital circuit
will calculate more repeatably, because of its high noise immunity.
Automated design tools
Much of the effort of designing large logic machines has been automated through the application of electronic design automation (EDA).
To automate costly engineering processes, some EDA tools can take state tables that describe state machines and automatically produce a truth table or a function table for the combinational logic
of a state machine. The state table is a piece of text that lists each
state, together with the conditions controlling the transitions between
them and their associated output signals.
Often, real logic systems are designed as a series of sub-projects, which are combined using a tool flow. The tool flow is usually controlled with the help of a scripting language,
a simplified computer language that can invoke the software design
tools in the right order. Tool flows for large logic systems such as microprocessors
can be thousands of commands long, and combine the work of hundreds of
engineers. Writing and debugging tool flows is an established
engineering specialty in companies that produce digital designs. The
tool flow usually terminates in a detailed computer file or set of files
that describe how to physically construct the logic. Often it consists
of instructions on how to draw the transistors and wires on an integrated circuit or a printed circuit board.
Parts of tool flows are debugged by verifying the outputs of
simulated logic against expected inputs. The test tools take computer
files with sets of inputs and outputs and highlight discrepancies
between the simulated behavior and the expected behavior. Once the input
data is believed to be correct, the design itself must still be
verified for correctness. Some tool flows verify designs by first
producing a design, then scanning the design to produce compatible input
data for the tool flow. If the scanned data matches the input data,
then the tool flow has probably not introduced errors.
The functional verification data are usually called test vectors.
The functional test vectors may be preserved and used in the factory to
test whether newly constructed logic works correctly. However,
functional test patterns do not discover all fabrication faults.
Production tests are often designed by automatic test pattern generation
software tools. These generate test vectors by examining the structure
of the logic and systematically generating tests targeting particular
potential faults. This way the fault coverage can closely approach 100%, provided the design is properly made testable (see next section).
Once a design exists, and is verified and testable, it often
needs to be processed to be manufacturable as well. Modern integrated
circuits have features smaller than the wavelength of the light used to
expose the photoresist. Software designed for manufacturability adds interference patterns to the exposure masks to eliminate open circuits and enhance the masks' contrast.
Design for testability
There
are several reasons for testing a logic circuit. When the circuit is
first developed, it is necessary to verify that the designed circuit meets the required functional and timing specifications. When multiple
copies of a correctly designed circuit are being manufactured, it is
essential to test each copy to ensure that the manufacturing process has
not introduced any flaws.
A large logic machine (say, with more than a hundred logical
variables) can have an astronomical number of possible states.
Obviously, factory testing every state of such a machine is unfeasible,
for even if testing each state only took a microsecond, there are more
possible states than there are microseconds since the universe began!
Large logic machines are almost always designed as assemblies of
smaller logic machines. To save time, the smaller sub-machines are
isolated by permanently installed design for test circuitry, and
are tested independently. One common testing scheme provides a test mode
that forces some part of the logic machine to enter a test cycle. The test cycle usually exercises large independent parts of the machine.
Boundary scan is a common test scheme that uses serial communication with external test equipment through one or more shift registers known as scan chains.
Serial scans have only one or two wires to carry the data, and minimize
the physical size and expense of the infrequently used test logic.
After all the test data bits are in place, the design is reconfigured to
be in normal mode and one or more clock pulses are applied, to
test for faults (e.g. stuck-at low or stuck-at high) and capture the
test result into flip-flops or latches in the scan shift register(s).
Finally, the result of the test is shifted out to the block boundary and
compared against the predicted good machine result.
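The sketch below walks through that scan-chain sequence in miniature: a test pattern is shifted in serially, one capture clock loads the response of the logic under test, and the result is shifted back out and compared with the expected "good machine" value. The 3-bit logic block and the pattern are invented examples.

```python
# Scan-chain sketch: shift a pattern in serially, apply one capture clock,
# shift the captured response out, and compare with the expected result.

CHAIN_LENGTH = 3

def logic_under_test(bits):
    a, b, c = bits
    return [a & b, b | c, a ^ c]          # combinational block being tested

def run_scan_test(pattern, expected):
    chain = [0] * CHAIN_LENGTH
    for bit in pattern:                   # serial shift-in, one bit per clock
        chain = [bit] + chain[:-1]
    chain = logic_under_test(chain)       # capture clock: load the response
    shifted_out = []
    for _ in range(CHAIN_LENGTH):         # serial shift-out
        shifted_out.append(chain[-1])
        chain = [0] + chain[:-1]
    return shifted_out == expected

# After shifting in 1,1,0 the register holds [0,1,1]; the logic returns
# [0,1,1], which shifts out last-bit-first as [1,1,0].
print("test passed:", run_scan_test([1, 1, 0], expected=[1, 1, 0]))
```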
In a board-test environment, serial to parallel testing has been formalized as the JTAG standard.
Trade-offs
Cost
Since
a digital system may use many logic gates, the overall cost of building
a computer correlates strongly with the cost of a logic gate. In the
1930s, the earliest digital logic systems were constructed from
telephone relays because these were inexpensive and relatively reliable.
The earliest integrated circuits were constructed to save weight and permit the Apollo Guidance Computer to control an inertial guidance system
for a spacecraft. The first integrated circuit logic gates cost nearly
US$50, which in 2023 would be equivalent to $515. Mass-produced gates on
integrated circuits became the least-expensive method to construct
digital logic.
With the rise of integrated circuits,
reducing the absolute number of chips used represented another way to
save costs. The goal of a designer is not just to make the simplest
circuit, but to keep the component count down. Sometimes this results in
more complicated designs with respect to the underlying digital logic
but nevertheless reduces the number of components, board size, and even
power consumption.
Reliability
Another
major motive for reducing component count on printed circuit boards is
to reduce the manufacturing defect rate due to failed soldered
connections and increase reliability. Defect and failure rates tend to
increase along with the total number of component pins.
The failure of a single logic gate may cause a digital machine to
fail. Where additional reliability is required, redundant logic can be
provided. Redundancy adds cost and power consumption over a
non-redundant system.
The reliability of a logic gate can be described by its mean time between failure
(MTBF). Digital machines first became useful when the MTBF for a switch
increased above a few hundred hours. Even so, many of these machines
had complex, well-rehearsed repair procedures, and would be
nonfunctional for hours because a tube burned out, or a moth got stuck
in a relay. Modern transistorized integrated circuit logic gates have
MTBFs greater than 82 billion hours (8.2×1010 h). This level of reliability is required because integrated circuits have so many logic gates.
Fan-out
Fan-out
describes how many logic inputs can be controlled by a single logic
output without exceeding the electrical current ratings of the gate
outputs. The minimum practical fan-out is about five. Modern electronic logic gates using CMOS transistors for switches have higher fan-outs.
Speed
The switching speed
describes how long it takes a logic output to change from true to false
or vice versa. Faster logic can accomplish more operations in less
time. Modern electronic digital logic routinely switches at 5 GHz, and some laboratory systems switch at more than 1 THz.
Digital design started with relay logic
which is slow. Occasionally a mechanical failure would occur. Fan-outs
were typically about 10, limited by the resistance of the coils and
arcing on the contacts from high voltages.
Later, vacuum tubes
were used. These were very fast, but generated heat, and were
unreliable because the filaments would burn out. Fan-outs were typically
5 to 7, limited by the heating from the tubes' current. In the 1950s,
special computer tubes were developed with filaments that omitted
volatile elements like silicon. These ran for hundreds of thousands of
hours.
The first semiconductor logic family was resistor–transistor logic. This was a thousand times more reliable than tubes, ran cooler, and used less power, but had a very low fan-out of 3. Diode–transistor logic
improved the fan-out up to about 7, and reduced the power. Some DTL
designs used two power-supplies with alternating layers of NPN and PNP
transistors to increase the fan-out.
Transistor–transistor logic
(TTL) was a great improvement over these. In early devices, fan-out
improved to 10, and later variations reliably achieved 20. TTL was also
fast, with some variations achieving switching times as low as 20 ns.
TTL is still used in some designs.
Emitter coupled logic is very fast but uses a lot of power. It was extensively used for high-performance computers, such as the Illiac IV, made up of many medium-scale components.
By far, the most common digital integrated circuits built today use CMOS logic, which is fast, offers high circuit density and low power per gate. This is used even in large, fast computers, such as the IBM System z.
Recent developments
In 2009, researchers discovered that memristors
can implement Boolean state storage and provide a complete logic
family with very small amounts of space and power, using familiar CMOS
semiconductor processes.