
Tuesday, May 21, 2024

Electronic design automation

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Electronic_design_automation

Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems such as integrated circuits and printed circuit boards. The tools work together in a design flow that chip designers use to design and analyze entire semiconductor chips. Since a modern semiconductor chip can have billions of components, EDA tools are essential for their design; this article describes EDA specifically with respect to integrated circuits (ICs).

History

Early days

The earliest electronic design automation is attributed to IBM with the documentation of its 700 series computers in the 1950s.

Prior to the development of EDA, integrated circuits were designed by hand and manually laid out. Some advanced shops used geometric software to generate tapes for a Gerber photoplotter, responsible for generating a monochromatic exposure image, but even those copied digital recordings of mechanically drawn components. The process was fundamentally graphic, with the translation from electronics to graphics done manually; the best-known company from this era was Calma, whose GDSII format is still in use today. By the mid-1970s, developers started to automate circuit design in addition to drafting and the first placement and routing tools were developed; as this occurred, the proceedings of the Design Automation Conference catalogued the large majority of the developments of the time.

The next era began following the publication of "Introduction to VLSI Systems" by Carver Mead and Lynn Conway in 1980, which came to be considered the standard textbook for chip design. The result was an increase in the complexity of the chips that could be designed, with improved access to design verification tools that used logic simulation. The chips were easier to lay out and more likely to function correctly, since their designs could be simulated more thoroughly prior to construction. Although the languages and tools have evolved, this general approach of specifying the desired behavior in a textual programming language and letting the tools derive the detailed physical design remains the basis of digital IC design today.

The earliest EDA tools were produced academically. One of the most famous was the "Berkeley VLSI Tools Tarball", a set of UNIX utilities used to design early VLSI systems. Widely used were the Espresso heuristic logic minimizer, responsible for circuit complexity reductions and Magic, a computer-aided design platform. Another crucial development was the formation of MOSIS, a consortium of universities and fabricators that developed an inexpensive way to train student chip designers by producing real integrated circuits. The basic concept was to use reliable, low-cost, relatively low-technology IC processes and pack a large number of projects per wafer, with several copies of chips from each project remaining preserved. Cooperating fabricators either donated the processed wafers or sold them at cost, as they saw the program as helpful to their own long-term growth.

Commercial birth

1981 marked the beginning of EDA as an industry. For many years, the larger electronic companies, such as Hewlett-Packard, Tektronix and Intel, had pursued EDA internally, with managers and developers beginning to spin out of these companies to concentrate on EDA as a business. Daisy Systems, Mentor Graphics and Valid Logic Systems were all founded around this time and collectively referred to as DMV. In 1981, the U.S. Department of Defense additionally began funding of VHDL as a hardware description language. Within a few years, there were many companies specializing in EDA, each with a slightly different emphasis.

The first trade show for EDA was held at the Design Automation Conference in 1984 and in 1986, Verilog, another popular high-level design language, was first introduced as a hardware description language by Gateway Design Automation. Simulators quickly followed these introductions, permitting direct simulation of chip designs and executable specifications. Within several years, back-ends were developed to perform logic synthesis.

Modern day

Current digital flows are extremely modular, with front ends producing standardized design descriptions that compile into invocations of units similar to cells without regard to their individual technology. Cells implement logic or other electronic functions using a particular integrated circuit technology. Fabricators generally provide libraries of components for their production processes, with simulation models that fit standard simulation tools.

Most analog circuits are still designed in a manual fashion, requiring specialist knowledge that is unique to analog design (such as matching concepts). Hence, analog EDA tools are far less modular, since many more functions are required, they interact more strongly and the components are, in general, less ideal.

EDA for electronics has rapidly increased in importance with the continuous scaling of semiconductor technology. Some users are foundry operators, who operate the semiconductor fabrication facilities ("fabs"), and others are design-service companies who use EDA software to evaluate an incoming design for manufacturing readiness. EDA tools are also used for programming design functionality into field-programmable gate arrays (FPGAs), customisable integrated circuit designs.

Software focuses

Design

A design flow is characterised by several primary components; these include:

  • High-level synthesis (also known as behavioral synthesis or algorithmic synthesis) – The high-level design description (e.g. in C/C++) is converted into a register-transfer level (RTL) description, which represents the circuitry as interactions between registers.
  • Logic synthesis – The translation of an RTL design description (e.g. written in Verilog or VHDL) into a discrete netlist, i.e. a representation of logic gates (a toy sketch follows this list).
  • Schematic capture – Entry of standard-cell digital, analog or RF schematics, using tools such as Capture CIS in OrCAD by Cadence and ISIS in Proteus.
  • Layout – Usually schematic-driven layout, using tools such as Layout in OrCAD by Cadence and ARES in Proteus.
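
To make the logic-synthesis step concrete, here is a minimal Python sketch that "synthesizes" a single RTL-style assignment (a 2-to-1 multiplexer) into a netlist of named gates. The netlist format, wire names and the absence of any optimization are simplifications invented for illustration; real synthesis tools also optimize the logic and map it onto a technology cell library.

```python
from itertools import count

# Each netlist entry is (gate_type, input_nets, output_net); wires get
# generated names n1, n2, ... This format is made up for the example.
_ids = count(1)

def new_wire():
    return f"n{next(_ids)}"

def synthesize_mux(sel, a, b, netlist):
    """Emit gates for out = (a AND sel) OR (b AND NOT sel); return the output net."""
    nsel, t1, t2, out = new_wire(), new_wire(), new_wire(), new_wire()
    netlist.append(("NOT", [sel], nsel))
    netlist.append(("AND", [a, sel], t1))
    netlist.append(("AND", [b, nsel], t2))
    netlist.append(("OR",  [t1, t2], out))
    return out

netlist = []
y = synthesize_mux("sel", "a", "b", netlist)   # y names the multiplexer output net
for gate, inputs, output in netlist:
    print(f"{gate}({', '.join(inputs)}) -> {output}")
```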

Simulation

  • Transistor simulation – low-level transistor-simulation of a schematic/layout's behavior, accurate at device-level.
  • Logic simulation – digital simulation of an RTL or gate-level netlist's Boolean (0/1) behavior, accurate at the Boolean level (a minimal simulator sketch follows this list).
  • Behavioral simulation – high-level simulation of a design's architectural operation, accurate at cycle-level or interface-level.
  • Hardware emulation – Use of special purpose hardware to emulate the logic of a proposed design. Can sometimes be plugged into a system in place of a yet-to-be-built chip; this is called in-circuit emulation.
  • Technology CAD – simulates and analyzes the underlying process technology. Electrical properties of devices are derived directly from device physics.
Schematic capture program
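
In the same spirit as the synthesis sketch above, the following Python sketch is a minimal combinational logic simulator: it evaluates a gate-level netlist (in the same invented format) for one input vector. Real logic simulators are event-driven and model timing and unknown/high-impedance states, which this sketch deliberately omits.

```python
# Truth functions for the toy gate types used in the netlist format above.
GATE_FUNCS = {
    "NOT": lambda ins: int(not ins[0]),
    "AND": lambda ins: int(all(ins)),
    "OR":  lambda ins: int(any(ins)),
}

def simulate(netlist, input_values):
    """netlist: list of (gate, input_nets, output_net) in topological order."""
    nets = dict(input_values)
    for gate, inputs, output in netlist:
        nets[output] = GATE_FUNCS[gate]([nets[i] for i in inputs])
    return nets

netlist = [
    ("NOT", ["sel"], "n1"),
    ("AND", ["a", "sel"], "n2"),
    ("AND", ["b", "n1"], "n3"),
    ("OR",  ["n2", "n3"], "n4"),   # n4 = a if sel else b
]
print(simulate(netlist, {"a": 1, "b": 0, "sel": 1})["n4"])   # -> 1
```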

Analysis and verification

  • Functional verification: ensures logic design matches specifications and executes tasks correctly. Includes dynamic functional verification via simulation, emulation, and prototypes.
  • RTL linting: checking adherence to coding rules such as syntax, semantics, and style.
  • Clock domain crossing verification (CDC check): similar to linting, but these checks/tools specialize in detecting and reporting potential issues like data loss, meta-stability due to use of multiple clock domains in the design.
  • Formal verification, also model checking: attempts to prove, by mathematical methods, that the system has certain desired properties, and that some undesired effects (such as deadlock) cannot occur.
  • Equivalence checking: algorithmic comparison between a chip's RTL-description and synthesized gate-netlist, to ensure functional equivalence at the logical level.
  • Static timing analysis: analysis of the timing of a circuit in an input-independent manner, hence finding a worst case over all possible inputs (see the sketch after this list).
  • Layout extraction: starting with a proposed layout, compute the (approximate) electrical characteristics of every wire and device. Often used in conjunction with static timing analysis above to estimate the performance of the completed chip.
  • Electromagnetic field solvers, or just field solvers, solve Maxwell's equations directly for cases of interest in IC and PCB design. They are known for being slower but more accurate than the layout extraction above.
  • Physical verification, PV: checking if a design is physically manufacturable, and that the resulting chips will not have any function-preventing physical defects, and will meet original specifications.
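
As referenced in the static timing analysis entry above, the Python sketch below propagates worst-case arrival times through a small combinational netlist using per-gate delays, without simulating any particular input vector. The gate delays and netlist are invented for illustration; production STA also models interconnect delay, setup/hold checks and multiple process corners.

```python
# Illustrative per-gate delays in nanoseconds (made-up values).
GATE_DELAY = {"NOT": 1.0, "AND": 2.0, "OR": 2.0}

def worst_arrival_times(netlist, primary_inputs):
    """netlist: (gate, input_nets, output_net) tuples in topological order."""
    arrival = {net: 0.0 for net in primary_inputs}
    for gate, inputs, output in netlist:
        # A gate's output is ready only after its slowest input, plus its delay.
        arrival[output] = max(arrival[i] for i in inputs) + GATE_DELAY[gate]
    return arrival

netlist = [
    ("NOT", ["sel"], "n1"),
    ("AND", ["a", "sel"], "n2"),
    ("AND", ["b", "n1"], "n3"),
    ("OR",  ["n2", "n3"], "n4"),
]
times = worst_arrival_times(netlist, ["a", "b", "sel"])
print(f"critical-path delay to n4: {times['n4']} ns")   # 1 + 2 + 2 = 5.0 ns
```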

Manufacturing preparation

Functional safety

  • Functional safety analysis: systematic computation of failure in time (FIT) rates and diagnostic coverage metrics for designs in order to meet the compliance requirements for the desired safety integrity levels (a small roll-up example follows below).
  • Functional safety synthesis: adding reliability enhancements to structured elements (modules, RAMs, ROMs, register files, FIFOs) to improve fault detection / fault tolerance. This includes (but is not limited to) the addition of error detection and/or correction codes (Hamming), redundant logic for fault detection and fault tolerance (duplication / triplication) and protocol checks (interface parity, address alignment, beat count).
  • Functional safety verification: running a fault campaign, including insertion of faults into the design and verification that the safety mechanism reacts in an appropriate manner for the faults that are deemed covered.
PCB layout and schematic for connector design
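
As a small illustration of the functional safety analysis item above, the Python sketch below rolls up failure-in-time (FIT) rates and diagnostic coverage for a few hypothetical blocks. All block names, FIT rates and coverage fractions are made-up example values, not data from any real design or standard.

```python
# 1 FIT = 1 failure per 10^9 device-hours. Numbers below are illustrative only.
blocks = [
    # (name, raw FIT rate, fraction of faults detected by safety mechanisms)
    ("cpu_core",     120.0, 0.95),
    ("sram",          80.0, 0.99),   # e.g. ECC-protected RAM
    ("interconnect",  40.0, 0.60),
]

total_fit = sum(fit for _, fit, _ in blocks)
undetected_fit = sum(fit * (1.0 - dc) for _, fit, dc in blocks)
overall_dc = 1.0 - undetected_fit / total_fit

print(f"total FIT: {total_fit:.1f}")
print(f"residual (undetected) FIT: {undetected_fit:.1f}")
print(f"overall diagnostic coverage: {overall_dc:.1%}")
```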

Companies

Current

Market capitalization and company name as of March 2023:

Defunct

Market capitalization and company name as of December 2011:

Acquisitions

Many EDA companies acquire small companies with software or other technology that can be adapted to their core business. Most of the market leaders are amalgamations of many smaller companies and this trend is helped by the tendency of software companies to design tools as accessories that fit naturally into a larger vendor's suite of programs on digital circuitry; many new tools incorporate analog design and mixed systems. This is happening due to a trend to place entire electronic systems on a single chip.

Technical conferences

Scanning probe microscopy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Scanning_probe_microscopy

Scanning probe microscopy (SPM) is a branch of microscopy that forms images of surfaces using a physical probe that scans the specimen. SPM was founded in 1981, with the invention of the scanning tunneling microscope, an instrument for imaging surfaces at the atomic level. The first successful scanning tunneling microscope experiment was done by Gerd Binnig and Heinrich Rohrer. The key to their success was using a feedback loop to regulate gap distance between the sample and the probe.

Many scanning probe microscopes can image several interactions simultaneously. The manner of using these interactions to obtain an image is generally called a mode.

The resolution varies somewhat from technique to technique, but some probe techniques reach a rather impressive atomic resolution. This is largely because piezoelectric actuators can execute motions with a precision and accuracy at the atomic level or better on electronic command. This family of techniques can be called "piezoelectric techniques". The other common denominator is that the data are typically obtained as a two-dimensional grid of data points, visualized in false color as a computer image.

Established types

Image formation

To form images, scanning probe microscopes raster scan the tip over the surface. At discrete points in the raster scan a value is recorded (which value depends on the type of SPM and the mode of operation, see below). These recorded values are displayed as a heat map to produce the final SPM images, usually using a black and white or an orange color scale.
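
A minimal sketch of this image-formation step, assuming NumPy and Matplotlib are available: the values recorded on the raster grid are held in a 2D array and rendered as a false-colour heat map. The "recorded" data here are synthetic, generated only to show the mechanics.

```python
import numpy as np
import matplotlib.pyplot as plt

# Fake raster-scan data standing in for the values recorded at each point.
rows, cols = 128, 128
y, x = np.mgrid[0:rows, 0:cols]
height = 0.5 * np.sin(x / 9.0) + 0.3 * np.cos(y / 13.0)
height += 0.05 * np.random.default_rng(0).standard_normal((rows, cols))

plt.imshow(height, cmap="afmhot", origin="lower")   # orange-style false colour
plt.colorbar(label="recorded value (a.u.)")
plt.title("raster-scan data rendered as a heat map")
plt.show()
```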

Constant interaction mode

In constant interaction mode (often referred to as "in feedback"), a feedback loop is used to physically move the probe closer to or further from the surface (in the z axis) under study to maintain a constant interaction. This interaction depends on the type of SPM, for scanning tunneling microscopy the interaction is the tunnel current, for contact mode AFM or MFM it is the cantilever deflection, etc. The type of feedback loop used is usually a PI-loop, which is a PID-loop where the differential gain has been set to zero (as it amplifies noise). The z position of the tip (scanning plane is the xy-plane) is recorded periodically and displayed as a heat map. This is normally referred to as a topography image.
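
The sketch below is a simplified discrete-time feedback loop in Python in the spirit of the PI loop described above: the z position is repeatedly adjusted so that a toy tunnel-current model stays at the setpoint, and the sequence of recorded z values is what would become a topography pixel. The current model, gains, step count and units are invented for illustration and do not correspond to any particular instrument.

```python
import math

def tunnel_current(z_gap):
    # Toy model: the interaction signal decays exponentially with the gap.
    return math.exp(-z_gap)

def pi_feedback(setpoint, z=1.0, kp=0.5, ki=5.0, dt=1e-3, steps=500):
    integral, trace = 0.0, []
    for _ in range(steps):
        error = setpoint - tunnel_current(z)   # positive when the gap is too large
        integral += error * dt                 # integral term (no D term, as in a PI loop)
        z -= kp * error + ki * integral        # move the tip to reduce the error
        trace.append(z)                        # recorded z values -> topography pixel
    return trace

trace = pi_feedback(setpoint=0.5)
print(f"gap settles near ln 2 ≈ 0.693: z = {trace[-1]:.3f}")
```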

In this mode a second image, known as the "error signal" or "error image", is also taken, which is a heat map of the interaction which was fed back on. Under perfect operation this image would be a blank at a constant value which was set on the feedback loop. Under real operation the image shows noise and often some indication of the surface structure. The user can use this image to edit the feedback gains to minimise features in the error signal.

If the gains are set incorrectly, many imaging artifacts are possible. If gains are too low features can appear smeared. If the gains are too high the feedback can become unstable and oscillate, producing striped features in the images which are not physical.

Constant height mode

In constant height mode the probe is not moved in the z-axis during the raster scan. Instead the value of the interaction under study is recorded (i.e. the tunnel current for STM, or the cantilever oscillation amplitude for amplitude modulated non-contact AFM). This recorded information is displayed as a heat map, and is usually referred to as a constant height image.

Constant height imaging is much more difficult than constant interaction imaging as the probe is much more likely to crash into the sample surface. Usually before performing constant height imaging one must image in constant interaction mode to check the surface has no large contaminants in the imaging region, to measure and correct for the sample tilt, and (especially for slow scans) to measure and correct for thermal drift of the sample. Piezoelectric creep can also be a problem, so the microscope often needs time to settle after large movements before constant height imaging can be performed.

Constant height imaging can be advantageous for eliminating the possibility of feedback artifacts.

Probe tips

The nature of an SPM probe tip depends entirely on the type of SPM being used. The combination of tip shape and the topography of the sample makes up an SPM image. However, certain characteristics are common to all, or at least most, SPMs.

Most importantly the probe must have a very sharp apex. The apex of the probe defines the resolution of the microscope, the sharper the probe the better the resolution. For atomic resolution imaging the probe must be terminated by a single atom.

For many cantilever-based SPMs (e.g. AFM and MFM), the entire cantilever and integrated probe are fabricated by acid etching, usually from silicon nitride. Conducting probes, needed for STM and SCM among others, are usually constructed from platinum/iridium wire for ambient operation, or tungsten for UHV operation. Other materials such as gold are sometimes used either for sample-specific reasons or if the SPM is to be combined with other experiments such as TERS. Platinum/iridium (and other ambient) probes are normally cut using sharp wire cutters; the optimal method is to cut most of the way through the wire and then pull to snap the last of the wire, increasing the likelihood of a single-atom termination. Tungsten wires are usually electrochemically etched, after which the oxide layer normally needs to be removed once the tip is in UHV conditions.

It is not uncommon for SPM probes (both purchased and "home-made") to fail to image with the desired resolution. This could be due to a tip that is too blunt, or the probe may have more than one peak, resulting in a doubled or ghost image. For some probes, in situ modification of the tip apex is possible; this is usually done either by crashing the tip into the surface or by applying a large electric field. The latter is achieved by applying a bias voltage (of order 10 V) between the tip and the sample; as this distance is usually 1-3 Angstroms, a very large field is generated.

The additional attachment of a quantum dot to the tip apex of a conductive probe enables surface potential imaging with high lateral resolution, a technique known as scanning quantum dot microscopy.

Advantages

The resolution of the microscopes is not limited by diffraction, only by the size of the probe-sample interaction volume (i.e., point spread function), which can be as small as a few picometres. Hence the ability to measure small local differences in object height (like that of 135 picometre steps on <100> silicon) is unparalleled. Laterally the probe-sample interaction extends only across the tip atom or atoms involved in the interaction.

The interaction can be used to modify the sample to create small structures (Scanning probe lithography).

Unlike electron microscope methods, specimens do not require a partial vacuum but can be observed in air at standard temperature and pressure or while submerged in a liquid reaction vessel.

Disadvantages

The detailed shape of the scanning tip is sometimes difficult to determine. Its effect on the resulting data is particularly noticeable if the specimen varies greatly in height over lateral distances of 10 nm or less.

The scanning techniques are generally slower in acquiring images, due to the scanning process. As a result, efforts are being made to greatly improve the scanning rate. Like all scanning techniques, the embedding of spatial information into a time sequence opens the door to uncertainties in metrology, say of lateral spacings and angles, which arise due to time-domain effects like specimen drift, feedback loop oscillation, and mechanical vibration.

The maximum image size is generally smaller.

Scanning probe microscopy is often not useful for examining buried solid-solid or liquid-liquid interfaces.

Scanning photo current microscopy (SPCM)

SPCM can be considered a member of the scanning probe microscopy (SPM) family. The difference between SPCM and other SPM techniques is that it exploits a focused laser beam, rather than a probe tip, as the local excitation source.

Characterization and analysis of the spatially resolved optical behavior of materials is very important in the optoelectronics industry. Simply put, this involves studying how the properties of a material vary across its surface or bulk structure. Techniques that enable spatially resolved optoelectronic measurements provide valuable insights for the enhancement of optical performance. Scanning photocurrent microscopy (SPCM) has emerged as a powerful technique for investigating spatially resolved optoelectronic properties in semiconductor nanostructures.

Principle

Laser scan of the scanning photocurrent microscope

In SPCM, a focused laser beam is used to excite the semiconducting material, producing excitons (electron–hole pairs). These excitons undergo different mechanisms, and if they can reach the nearby electrodes before recombination takes place, a photocurrent is generated. This photocurrent is position dependent, as the laser raster scans the device.

SPCM analysis

Using the position dependent photocurrent map, important photocurrent dynamics can be analyzed.

SPCM provides information such as characteristic lengths (for example the minority-carrier diffusion length), recombination dynamics, doping concentration, internal electric field, etc.
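
As one hedged example of such an analysis, the Python sketch below (assuming NumPy and SciPy are available) fits an exponential decay to a synthetic photocurrent-versus-position profile; the fitted decay length stands in for a minority-carrier diffusion length. Real analyses work on measured SPCM line cuts and must also account for contacts, surface recombination and the laser spot size.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(x, i0, diff_len):
    # Simple model: photocurrent falls off exponentially with distance
    # from the collecting contact, with decay length diff_len.
    return i0 * np.exp(-x / diff_len)

x_um = np.linspace(0.0, 10.0, 60)                      # laser position (µm)
rng = np.random.default_rng(1)
i_meas = decay(x_um, 1.0, 2.5) + 0.02 * rng.standard_normal(x_um.size)  # synthetic data

popt, _ = curve_fit(decay, x_um, i_meas, p0=(1.0, 1.0))
print(f"fitted diffusion length: {popt[1]:.2f} µm")    # close to the 2.5 µm used above
```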

Visualization and analysis software

In all instances, and contrary to optical microscopes, rendering software is necessary to produce images. Such software is produced and embedded by instrument manufacturers but is also available as an accessory from specialized work groups or companies. The main packages used are the freeware Gwyddion and WSxM (developed by Nanotec), and the commercial SPIP (developed by Image Metrology), FemtoScan Online (developed by Advanced Technologies Center), MountainsMap SPM (developed by Digital Surf) and TopoStitch (developed by Image Metrology).

Kelvin probe force microscope

From Wikipedia, the free encyclopedia
In Kelvin probe force microscopy, a conducting cantilever is scanned over a surface at a constant height in order to map the work function of the surface.
A typical scanning Kelvin probe (SKP) instrument. On the left is the control unit with lock-in amplifier and backing potential controller. On the right is the x, y, z scanning axis with vibrator, electrometer and probe mounted.

Kelvin probe force microscopy (KPFM), also known as surface potential microscopy, is a noncontact variant of atomic force microscopy (AFM). By raster scanning in the x,y plane the work function of the sample can be locally mapped for correlation with sample features. When there is little or no magnification, this approach can be described as using a scanning Kelvin probe (SKP). These techniques are predominantly used to measure corrosion and coatings.

With KPFM, the work function of surfaces can be observed at atomic or molecular scales. The work function relates to many surface phenomena, including catalytic activity, reconstruction of surfaces, doping and band-bending of semiconductors, charge trapping in dielectrics and corrosion. The map of the work function produced by KPFM gives information about the composition and electronic state of the local structures on the surface of a solid.

History

The SKP technique is based on parallel plate capacitor experiments performed by Lord Kelvin in 1898. In the 1930s William Zisman built upon Lord Kelvin's experiments to develop a technique to measure contact potential differences of dissimilar metals.

Working principle

Diagram of Fermi level changes during scanning Kelvin probe
The changes to the Fermi levels of the scanning Kelvin probe (SKP) sample and probe during measurement are shown. On the electrical connection of the probe and sample their Fermi levels equilibrate, and a charge develops at the probe and sample. A backing potential is applied to null this charge, returning the sample Fermi level to its original position.

In SKP the probe and sample are held parallel to each other and electrically connected to form a parallel plate capacitor. The probe is selected to be of a different material to the sample, so each component initially has a distinct Fermi level. When an electrical connection is made between the probe and the sample, electron flow can occur between them, from the higher to the lower Fermi level. This electron flow causes the equilibration of the probe and sample Fermi levels. Furthermore, a surface charge develops on the probe and the sample, with a related potential difference known as the contact potential (Vc). In SKP the probe is vibrated along an axis perpendicular to the plane of the sample. This vibration causes a change in probe-to-sample distance, which in turn results in the flow of current, taking the form of an AC sine wave. The resulting AC sine wave is demodulated to a DC signal through the use of a lock-in amplifier. Typically the user must select the correct reference phase value used by the lock-in amplifier. Once the DC potential has been determined, an external potential, known as the backing potential (Vb), can be applied to null the charge between the probe and the sample. When the charge is nullified, the Fermi level of the sample returns to its original position. This means that Vb is equal to -Vc, which is the work function difference between the SKP probe and the sample being measured.

Illustration of scanning Kelvin probe
Simplified illustration of the scanning Kelvin probe (SKP) technique. The probe is shown to vibrate in z, perpendicular to the sample plane. The probe and sample form a parallel plate capacitor as shown.
 
Block diagram of scanning Kelvin probe
Block diagram of a scanning Kelvin probe (SKP) instrument showing computer, control unit, scan axes, vibrator, probe, and sample

The cantilever in the AFM is a reference electrode that forms a capacitor with the surface, over which it is scanned laterally at a constant separation. The cantilever is not piezoelectrically driven at its mechanical resonance frequency ω0 as in normal AFM, although an alternating current (AC) voltage is applied at this frequency.

When there is a direct-current (DC) potential difference between the tip and the surface, the AC+DC voltage offset will cause the cantilever to vibrate. The origin of the force can be understood by considering that the energy of the capacitor formed by the cantilever and the surface is

E = ½·C·(VDC + VAC sin(ω0t))² = ½·C·(VDC² + ½·VAC²) + C·VDC·VAC·sin(ω0t) − ¼·C·VAC²·cos(2ω0t)

The first bracket contains the terms at DC; only the cross-term proportional to the VDC·VAC product is at the resonance frequency ω0. The resulting vibration of the cantilever is detected using usual scanned-probe microscopy methods (typically involving a diode laser and a four-quadrant detector). A null circuit is used to drive the DC potential of the tip to a value which minimizes the vibration. A map of this nulling DC potential versus the lateral position coordinate therefore produces an image of the work function of the surface.

A related technique, electrostatic force microscopy (EFM), directly measures the force produced on a charged tip by the electric field emanating from the surface. EFM operates much like magnetic force microscopy in that the frequency shift or amplitude change of the cantilever oscillation is used to detect the electric field. However, EFM is much more sensitive to topographic artifacts than KPFM. Both EFM and KPFM require the use of conductive cantilevers, typically metal-coated silicon or silicon nitride. Another AFM-based technique for the imaging of electrostatic surface potentials, scanning quantum dot microscopy, quantifies surface potentials based on their ability to gate a tip-attached quantum dot.

Factors affecting SKP measurements

The quality of an SKP measurement is affected by a number of factors. This includes the diameter of the SKP probe, the probe to sample distance, and the material of the SKP probe. The probe diameter is important in the SKP measurement because it affects the overall resolution of the measurement, with smaller probes leading to improved resolution. On the other hand, reducing the size of the probe causes an increase in fringing effects which reduces the sensitivity of the measurement by increasing the measurement of stray capacitances. The material used in the construction of the SKP probe is important to the quality of the SKP measurement. This occurs for a number of reasons. Different materials have different work function values which will affect the contact potential measured. Different materials have different sensitivity to humidity changes. The material can also affect the resulting lateral resolution of the SKP measurement. In commercial probes tungsten is used, though probes of platinum, copper, gold, and NiCr have been used. The probe to sample distance affects the final SKP measurement, with smaller probe to sample distances improving the lateral resolution and the signal-to-noise ratio of the measurement. Furthermore, reducing the SKP probe to sample distance increases the intensity of the measurement, where the intensity of the measurement is proportional to 1/d², where d is the probe to sample distance. The effects of changing probe to sample distance on the measurement can be counteracted by using SKP in constant distance mode.

Work function

The Kelvin probe force microscope or Kelvin force microscope (KFM) is based on an AFM set-up and the determination of the work function is based on the measurement of the electrostatic forces between the small AFM tip and the sample. The conducting tip and the sample are characterized by (in general) different work functions, which represent the difference between the Fermi level and the vacuum level for each material. If both elements were brought in contact, a net electric current would flow between them until the Fermi levels were aligned. The difference between the work functions is called the contact potential difference and is denoted generally with VCPD. An electrostatic force exists between tip and sample, because of the electric field between them. For the measurement a voltage is applied between tip and sample, consisting of a DC-bias VDC and an AC-voltage VAC sin(ωt) of frequency ω.

Tuning the AC-frequency to the resonant frequency of the AFM cantilever results in an improved sensitivity. The electrostatic force in a capacitor may be found by differentiating the energy function with respect to the separation of the elements and can be written as

F = −½·(dC/dz)·V²

where C is the capacitance, z is the separation, and V is the voltage, each between tip and surface. Substituting the previous formula for the voltage V shows that the electrostatic force can be split up into three contributions, as the total electrostatic force F acting on the tip then has spectral components at the frequencies ω and 2ω.

The DC component, FDC, contributes to the topographical signal, the term Fω at the characteristic frequency ω is used to measure the contact potential and the contribution F2ω can be used for capacitance microscopy.
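
For reference, the block below is a short LaTeX sketch of the standard textbook decomposition implied here, writing the applied voltage as V = (VDC − VCPD) + VAC sin(ωt); it is not a derivation specific to any one instrument.

```latex
% Standard decomposition of F = -1/2 (dC/dz) V^2 with
% V = (V_DC - V_CPD) + V_AC sin(omega t):
\[
\begin{aligned}
F_{\mathrm{DC}}  &= -\frac{\partial C}{\partial z}
     \left[\tfrac{1}{2}\bigl(V_{\mathrm{DC}}-V_{\mathrm{CPD}}\bigr)^{2}
     + \tfrac{1}{4}V_{\mathrm{AC}}^{2}\right],\\
F_{\omega}       &= -\frac{\partial C}{\partial z}
     \bigl(V_{\mathrm{DC}}-V_{\mathrm{CPD}}\bigr)\,V_{\mathrm{AC}}\sin(\omega t),\\
F_{2\omega}      &= +\tfrac{1}{4}\,\frac{\partial C}{\partial z}\,
     V_{\mathrm{AC}}^{2}\cos(2\omega t).
\end{aligned}
\]
```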

Contact potential measurements

For contact potential measurements a lock-in amplifier is used to detect the cantilever oscillation at ω. During the scan VDC will be adjusted so that the electrostatic forces between the tip and the sample become zero and thus the response at the frequency ω becomes zero. Since the electrostatic force at ω depends on VDC − VCPD, the value of VDC that minimizes the ω-term corresponds to the contact potential. Absolute values of the sample work function can be obtained if the tip is first calibrated against a reference sample of known work function. Apart from this, one can use the normal topographic scan methods at the resonance frequency ω independently of the above. Thus, in one scan, the topography and the contact potential of the sample are determined simultaneously. This can be done in (at least) two different ways: 1) The topography is captured in AC mode which means that the cantilever is driven by a piezo at its resonant frequency. Simultaneously the AC voltage for the KPFM measurement is applied at a frequency slightly lower than the resonant frequency of the cantilever. In this measurement mode the topography and the contact potential difference are captured at the same time and this mode is often called single-pass. 2) One line of the topography is captured either in contact or AC mode and is stored internally. Then, this line is scanned again, while the cantilever remains on a defined distance to the sample without a mechanically driven oscillation but the AC voltage of the KPFM measurement is applied and the contact potential is captured as explained above. It is important to note that the cantilever tip must not be too close to the sample in order to allow good oscillation with applied AC voltage. Therefore, KPFM can be performed simultaneously during AC topography measurements but not during contact topography measurements.

Applications

The Volta potential measured by SKP is directly proportional to the corrosion potential of a material, as such SKP has found widespread use in the study of the fields of corrosion and coatings. In the field of coatings for example, a scratched region of a self-healing shape memory polymer coating containing a heat generating agent on aluminium alloys was measured by SKP. Initially after the scratch was made the Volta potential was noticeably higher and wider over the scratch than over the rest of the sample, implying this region is more likely to corrode. The Volta potential decreased over subsequent measurements, and eventually the peak over the scratch completely disappeared implying the coating has healed. Because SKP can be used to investigate coatings in a non-destructive way it has also been used to determine coating failure. In a study of polyurethane coatings, it was seen that the work function increases with increasing exposure to high temperature and humidity. This increase in work function is related to decomposition of the coating likely from hydrolysis of bonds within the coating.

Using SKP the corrosion of industrially important alloys has been measured. In particular with SKP it is possible to investigate the effects of environmental stimulus on corrosion. For example, the microbially induced corrosion of stainless steel and titanium has been examined. SKP is useful to study this sort of corrosion because it usually occurs locally, therefore global techniques are poorly suited. Surface potential changes related to increased localized corrosion were shown by SKP measurements. Furthermore, it was possible to compare the resulting corrosion from different microbial species. In another example SKP was used to investigate biomedical alloy materials, which can be corroded within the human body. In studies on Ti-15Mo under inflammatory conditions, SKP measurements showed a lower corrosion resistance at the bottom of a corrosion pit than at the oxide protected surface of the alloy. SKP has also been used to investigate the effects of atmospheric corrosion, for example to investigate copper alloys in marine environment. In this study Kelvin potentials became more positive, indicating a more positive corrosion potential, with increased exposure time, due to an increase in thickness of corrosion products. As a final example SKP was used to investigate stainless steel under simulated conditions of gas pipeline. These measurements showed an increase in difference in corrosion potential of cathodic and anodic regions with increased corrosion time, indicating a higher likelihood of corrosion. Furthermore, these SKP measurements provided information about local corrosion, not possible with other techniques.

SKP has been used to investigate the surface potential of materials used in solar cells, with the advantage that it is a non-contact, and therefore a non-destructive technique. It can be used to determine the electron affinity of different materials in turn allowing the energy level overlap of conduction bands of differing materials to be determined. The energy level overlap of these bands is related to the surface photovoltage response of a system.

As a non-contact, non-destructive technique SKP has been used to investigate latent fingerprints on materials of interest for forensic studies. When fingerprints are left on a metallic surface they leave behind salts which can cause the localized corrosion of the material of interest. This leads to a change in Volta potential of the sample, which is detectable by SKP. SKP is particularly useful for these analyses because it can detect this change in Volta potential even after heating, or coating by, for example, oils.

SKP has been used to analyze the corrosion mechanisms of schreibersite-containing meteorites. The aim of these studies has been to investigate the role of such meteorites in releasing species utilized in prebiotic chemistry.

In the field of biology SKP has been used to investigate the electric fields associated with wounding, and acupuncture points.

In the field of electronics, KPFM is used to investigate the charge trapping in High-k gate oxides/interfaces of electronic devices.

Digital electronics

From Wikipedia, the free encyclopedia
 
Digital electronics
A digital signal has two or more distinguishable waveforms, in this example a high voltage and a low voltage, each of which can be mapped onto a digit.
 
An industrial digital controller

Digital electronics is a field of electronics involving the study of digital signals and the engineering of devices that use or produce them. This is in contrast to analog electronics, which works primarily with analog signals. Despite the name, digital electronics designs include important analog design considerations.

Digital electronic circuits are usually made from large assemblies of logic gates, often packaged in integrated circuits. Complex devices may have simple electronic representations of Boolean logic functions.

History

The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705) and he also established that by using the binary system, the principles of arithmetic and logic could be joined. Digital logic as we know it was the brainchild of George Boole in the mid 19th century. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification of the Fleming valve in 1907 could be used as an AND gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, shared the 1954 Nobel Prize in Physics for creating the first modern electronic AND gate in 1924.

Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II, mechanical analog computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic digital computers were developed, with the term digital being proposed by George Stibitz in 1942. Originally they were the size of a large room, consuming as much power as several hundred modern PCs.

The Z3 was an electromechanical computer designed by Konrad Zuse. Finished in 1941, it was the world's first working programmable, fully automatic digital computer. Its operation was facilitated by the invention of the vacuum tube in 1904 by John Ambrose Fleming.

At the same time that digital calculation replaced analog, purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents. John Bardeen and Walter Brattain invented the point-contact transistor at Bell Labs in 1947, followed by William Shockley inventing the bipolar junction transistor at Bell Labs in 1948.

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of vacuum tubes. Their "transistorised computer", the first in the world, was operational by 1953, and a second version was completed there in April 1955. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors were smaller, more reliable, had indefinite lifespans, and required less power, thereby giving off less heat and allowing much denser concentrations of circuits, up to tens of thousands in a relatively compact space.

While working at Texas Instruments in July 1958, Jack Kilby recorded his initial ideas concerning the integrated circuit (IC), then successfully demonstrated the first working integrated circuit on 12 September 1958. Kilby's chip was made of germanium. The following year, Robert Noyce at Fairchild Semiconductor invented the silicon integrated circuit. The basis for Noyce's silicon IC was the planar process, developed in early 1959 by Jean Hoerni, who was in turn building on Mohamed Atalla's silicon surface passivation method developed in 1957. This new technique, the integrated circuit, allowed for quick, low-cost fabrication of complex circuits by having a set of electronic circuits on one small plate ("chip") of semiconductor material, normally silicon.

The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. The MOSFET's advantages include high scalability, affordability, low power consumption, and high transistor density. Its rapid on–off electronic switching speed also makes it ideal for generating pulse trains, the basis for electronic digital signals, in contrast to BJTs which, more slowly, generate analog signals resembling sine waves. Along with MOS large-scale integration (LSI), these factors make the MOSFET an important switching device for digital circuits. The MOSFET revolutionized the electronics industry, and is the most common semiconductor device.

In the early days of integrated circuits, each chip was limited to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. The wide adoption of the MOSFET transistor by the early 1970s led to the first large-scale integration (LSI) chips with more than 10,000 transistors on a single chip. Following the wide adoption of CMOS, a type of MOSFET logic, by the 1980s, millions and then billions of MOSFETs could be placed on one chip as the technology progressed, and good designs required thorough planning, giving rise to new design methods. The transistor count of devices and total production rose to unprecedented heights. The total amount of transistors produced until 2018 has been estimated to be 1.3×10²² (13 sextillion).

The wireless revolution (the introduction and proliferation of wireless networks) began in the 1990s and was enabled by the wide adoption of MOSFET-based RF power amplifiers (power MOSFET and LDMOS) and RF circuits (RF CMOS). Wireless networks allowed for public digital transmission without the need for cables, leading to digital television, satellite and digital radio, GPS, wireless Internet and mobile phones through the 1990s–2000s.

Properties

An advantage of digital circuits when compared to analog circuits is that signals represented digitally can be transmitted without degradation caused by noise. For example, a continuous audio signal transmitted as a sequence of 1s and 0s, can be reconstructed without error, provided the noise picked up in transmission is not enough to prevent identification of the 1s and 0s.

In a digital system, a more precise representation of a signal can be obtained by using more binary digits to represent it. While this requires more digital circuits to process the signals, each digit is handled by the same kind of hardware, resulting in an easily scalable system. In an analog system, additional resolution requires fundamental improvements in the linearity and noise characteristics of each step of the signal chain.

With computer-controlled digital systems, new functions can be added through software revision and no hardware changes are needed. Often this can be done outside of the factory by updating the product's software. This way, the product's design errors can be corrected even after the product is in a customer's hands.

Information storage can be easier in digital systems than in analog ones. The noise immunity of digital systems permits data to be stored and retrieved without degradation. In an analog system, noise from aging and wear degrades the information stored. In a digital system, as long as the total noise is below a certain level, the information can be recovered perfectly. Even when more significant noise is present, the use of redundancy permits the recovery of the original data provided too many errors do not occur.

In some cases, digital circuits use more energy than analog circuits to accomplish the same tasks, thus producing more heat, which adds to the complexity of the circuits, for example through the inclusion of heat sinks. In portable or battery-powered systems this can limit the use of digital systems. For example, battery-powered cellular phones often use a low-power analog front-end to amplify and tune the radio signals from the base station. However, a base station has grid power and can use power-hungry, but very flexible, software radios. Such base stations can easily be reprogrammed to process the signals used in new cellular standards.

Many useful digital systems must translate from continuous analog signals to discrete digital signals. This causes quantization errors. Quantization error can be reduced if the system stores enough digital data to represent the signal to the desired degree of fidelity. The Nyquist–Shannon sampling theorem provides an important guideline as to how much digital data is needed to accurately portray a given analog signal.
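
A toy Python illustration of this trade-off: a sine wave sampled above its Nyquist rate is quantized with increasing numbers of bits, and the RMS quantization error shrinks accordingly. The signal frequency, sample rate and bit depths are arbitrary example values.

```python
import numpy as np

fs = 48_000                                   # sample rate, comfortably above 2 x 1 kHz
t = np.arange(0, 0.01, 1 / fs)
signal = np.sin(2 * np.pi * 1_000 * t)        # stands in for the analog input

for bits in (4, 8, 12, 16):
    levels = 2 ** bits
    # Map [-1, 1] onto integer codes, round, then map back to [-1, 1].
    quantized = np.round((signal + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
    rms_error = np.sqrt(np.mean((signal - quantized) ** 2))
    print(f"{bits:2d} bits -> RMS quantization error ≈ {rms_error:.6f}")
```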

If a single piece of digital data is lost or misinterpreted, in some systems only a small error may result, while in other systems the meaning of large blocks of related data can completely change. For example, a single-bit error in audio data stored directly as linear pulse-code modulation causes, at worst, a single audible click. But when using audio compression to save storage space and transmission time, a single bit error may cause a much larger disruption.

Because of the cliff effect, it can be difficult for users to tell if a particular system is right on the edge of failure, or if it can tolerate much more noise before failing. Digital fragility can be reduced by designing a digital system for robustness. For example, a parity bit or other error management method can be inserted into the signal path. These schemes help the system detect errors, and then either correct the errors, or request retransmission of the data.
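
A minimal Python sketch of the parity-bit idea mentioned above: one extra bit lets a receiver detect any single-bit error in a word, though it cannot locate the error, and two flipped bits cancel out. The word contents are arbitrary.

```python
def add_parity(bits):
    """Append an even-parity bit to a list of 0/1 values."""
    return bits + [sum(bits) % 2]

def check_parity(word):
    # Even parity: the total number of 1s (data + parity bit) must be even.
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1, 0, 0, 1, 0])
print(check_parity(word))              # True: parity consistent, no error detected

corrupted = word.copy()
corrupted[3] ^= 1                      # single-bit error "in transit"
print(check_parity(corrupted))         # False: error detected
```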

Construction

A binary clock, hand-wired on breadboards

A digital circuit is typically constructed from small electronic circuits called logic gates that can be used to create combinational logic. Each logic gate is designed to perform a function of Boolean logic when acting on logic signals. A logic gate is generally created from one or more electrically controlled switches, usually transistors but thermionic valves have seen historic use. The output of a logic gate can, in turn, control or feed into more logic gates.

Another form of digital circuit is constructed from lookup tables (many sold as "programmable logic devices", though other kinds of PLDs exist). Lookup tables can perform the same functions as machines based on logic gates, but can be easily reprogrammed without changing the wiring. This means that a designer can often repair design errors without changing the arrangement of wires. Therefore, in small volume products, programmable logic devices are often the preferred solution. They are usually designed by engineers using electronic design automation software.
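
A small Python sketch of the lookup-table idea: a 2-input "gate" is just a 4-entry table indexed by its inputs, so changing the table reprograms the function without any rewiring. The helper name make_lut is invented for illustration.

```python
def make_lut(table):
    """table: outputs for inputs (0,0), (0,1), (1,0), (1,1)."""
    return lambda a, b: table[(a << 1) | b]

and_gate = make_lut([0, 0, 0, 1])
xor_gate = make_lut([0, 1, 1, 0])
print(and_gate(1, 1), xor_gate(1, 1))      # 1 0

# "Reprogramming": the same structure now computes NOR instead.
nor_gate = make_lut([1, 0, 0, 0])
print(nor_gate(0, 0), nor_gate(1, 0))      # 1 0
```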

Integrated circuits consist of multiple transistors on one silicon chip, and are the least expensive way to make large numbers of interconnected logic gates. Integrated circuits are usually interconnected on a printed circuit board, which holds electrical components and connects them together with copper traces.

Design

Engineers use many methods to minimize logic redundancy in order to reduce the circuit complexity. Reduced complexity reduces component count and potential errors and therefore typically reduces cost. Logic redundancy can be removed by several well-known techniques, such as binary decision diagrams, Boolean algebra, Karnaugh maps, the Quine–McCluskey algorithm, and the heuristic computer method. These operations are typically performed within a computer-aided design system.
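
As a hedged example of such automated minimization, assuming the SymPy library is available, its SOPform function reduces a truth table (given as minterms and don't-cares) to a compact sum-of-products expression, much as the Karnaugh-map and Quine–McCluskey methods do by hand.

```python
from sympy import symbols
from sympy.logic import SOPform

w, x, y, z = symbols("w x y z")
# Rows of the truth table (as bit vectors for w, x, y, z) where the output is 1,
# plus don't-care rows the minimizer is free to use either way.
minterms  = [[0, 0, 0, 1], [0, 0, 1, 1], [0, 1, 1, 1],
             [1, 0, 1, 1], [1, 1, 1, 1]]
dontcares = [[0, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 1]]

print(SOPform([w, x, y, z], minterms, dontcares))   # e.g. (y & z) | (z & ~w)
```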

Embedded systems with microcontrollers and programmable logic controllers are often used to implement digital logic for complex systems that do not require optimal performance. These systems are usually programmed by software engineers or by electricians, using ladder logic.

Representation

A digital circuit's input-output relationship can be represented as a truth table. An equivalent high-level circuit uses logic gates, each represented by a different shape (standardized by IEEE/ANSI 91–1984). A low-level representation uses an equivalent circuit of electronic switches (usually transistors).

Most digital systems divide into combinational and sequential systems. The output of a combinational system depends only on the present inputs. However, a sequential system has some of its outputs fed back as inputs, so its output may depend on past inputs in addition to present inputs, to produce a sequence of operations. Simplified representations of their behavior called state machines facilitate design and test.

Sequential systems divide into two further subcategories. "Synchronous" sequential systems change state all at once when a clock signal changes state. "Asynchronous" sequential systems propagate changes whenever inputs change. Synchronous sequential systems are made using flip flops that store inputted voltages as a bit only when the clock changes.

Synchronous systems

A 4-bit ring counter using D-type flip flops is an example of synchronous logic. Each device is connected to the clock signal, and all update together.

The usual way to implement a synchronous sequential state machine is to divide it into a piece of combinational logic and a set of flip flops called a state register. The state register represents the state as a binary number. The combinational logic produces the binary representation for the next state. On each clock cycle, the state register captures the feedback generated from the previous state of the combinational logic and feeds it back as an unchanging input to the combinational part of the state machine. The clock rate is limited by the most time-consuming logic calculation in the combinational logic.
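
A compact Python sketch of that structure: a table plays the role of the combinational next-state logic, and one loop iteration corresponds to one clock edge at which the state register captures the next state. The example machine (a detector for two consecutive 1s) and its state names are invented for illustration.

```python
# Combinational next-state logic expressed as a table: (state, input) -> next state.
NEXT_STATE = {
    ("IDLE", 0): "IDLE", ("IDLE", 1): "GOT1",
    ("GOT1", 0): "IDLE", ("GOT1", 1): "DONE",
    ("DONE", 0): "IDLE", ("DONE", 1): "DONE",
}

def run(inputs, state="IDLE"):
    for bit in inputs:                      # one iteration = one clock cycle
        state = NEXT_STATE[(state, bit)]    # state register captures the next state
        yield state

print(list(run([1, 1, 0, 1, 1])))   # ['GOT1', 'DONE', 'IDLE', 'GOT1', 'DONE']
```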

Asynchronous systems

Most digital logic is synchronous because it is easier to create and verify a synchronous design. However, asynchronous logic has the advantage of its speed not being constrained by an arbitrary clock; instead, it runs at the maximum speed of its logic gates.  Nevertheless, most systems need to accept external unsynchronized signals into their synchronous logic circuits. This interface is inherently asynchronous and must be analyzed as such. Examples of widely used asynchronous circuits include synchronizer flip-flops, switch debouncers and arbiters.

Asynchronous logic components can be hard to design because all possible states, in all possible timings, must be considered. The usual method is to construct a table of the minimum and maximum time that each such state can exist and then adjust the circuit to minimize the number of such states. The designer must force the circuit to periodically wait for all of its parts to enter a compatible state (this is called "self-resynchronization"). Without careful design, it is easy to accidentally produce asynchronous logic that is unstable, meaning that real electronics will have unpredictable results because of the cumulative delays caused by small variations in the values of the electronic components.

Register transfer systems

Example of a simple circuit with a toggling output. The inverter forms the combinational logic in this circuit, and the register holds the state.

Many digital systems are data flow machines. These are usually designed using synchronous register transfer logic and written with hardware description languages such as VHDL or Verilog.

In register transfer logic, binary numbers are stored in groups of flip flops called registers. A sequential state machine controls when each register accepts new data from its input. The outputs of each register are a bundle of wires called a bus that carries that number to other calculations. A calculation is simply a piece of combinational logic. Each calculation also has an output bus, and these may be connected to the inputs of several registers. Sometimes a register will have a multiplexer on its input so that it can store a number from any one of several buses.
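
A minimal Python sketch of one register-transfer step: a register with a multiplexer on its input either loads from a selected bus on the clock edge or holds its value. Bus names, widths and the select/enable signals are illustrative only.

```python
def clock_edge(register, buses, select, load_enable):
    """Return the register's value after one rising clock edge."""
    if load_enable:
        return buses[select]      # multiplexer picks one source bus to load from
    return register               # otherwise the register holds its value

r = 0
alu_bus, mem_bus = 0b1010, 0b0111
r = clock_edge(r, [alu_bus, mem_bus], select=0, load_enable=True)
print(bin(r))                     # 0b1010: loaded from the ALU bus
r = clock_edge(r, [alu_bus, mem_bus], select=1, load_enable=False)
print(bin(r))                     # 0b1010: hold, no load this cycle
```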

Asynchronous register-transfer systems (such as computers) have a general solution. In the 1980s, some researchers discovered that almost all synchronous register-transfer machines could be converted to asynchronous designs by using first-in-first-out synchronization logic. In this scheme, the digital machine is characterized as a set of data flows. In each step of the flow, a synchronization circuit determines when the outputs of that step are valid and instructs the next stage when to use these outputs.

Computer design

Intel 80486DX2 microprocessor

The most general-purpose register-transfer logic machine is a computer. This is basically an automatic binary abacus. The control unit of a computer is usually designed as a microprogram run by a microsequencer. A microprogram is much like a player-piano roll. Each table entry of the microprogram commands the state of every bit that controls the computer. The sequencer then counts, and the count addresses the memory or combinational logic machine that contains the microprogram. The bits from the microprogram control the arithmetic logic unit, memory and other parts of the computer, including the microsequencer itself. In this way, the complex task of designing the controls of a computer is reduced to the simpler task of programming a collection of much simpler logic machines.

Almost all computers are synchronous. However, asynchronous computers have also been built. One example is the ASPIDA DLX core. Another was offered by ARM Holdings. They do not, however, have any speed advantages because modern computer designs already run at the speed of their slowest component, usually memory. They do use somewhat less power because a clock distribution network is not needed. An unexpected advantage is that asynchronous computers do not produce spectrally-pure radio noise. They are used in some radio-sensitive mobile-phone base-station controllers. They may be more secure in cryptographic applications because their electrical and radio emissions can be more difficult to decode.

Computer architecture

Computer architecture is a specialized engineering activity that tries to arrange the registers, calculation logic, buses and other parts of the computer in the best way possible for a specific purpose. Computer architects have put a lot of work into reducing the cost and increasing the speed of computers in addition to boosting their immunity to programming errors. An increasingly common goal of computer architects is to reduce the power used in battery-powered computer systems, such as smartphones.

Design issues in digital circuits

Digital circuits are made from analog components. The design must assure that the analog nature of the components does not dominate the desired digital behavior. Digital systems must manage noise and timing margins, parasitic inductances and capacitances.

Bad designs have intermittent problems such as glitches (vanishingly fast pulses that may trigger some logic but not others) and runt pulses that do not reach valid threshold voltages.

Additionally, where clocked digital systems interface to analog systems or systems that are driven from a different clock, the digital system can be subject to metastability where a change to the input violates the setup time for a digital input latch.

Since digital circuits are made from analog components, digital circuits calculate more slowly than low-precision analog circuits that use a similar amount of space and power. However, the digital circuit will calculate more repeatably, because of its high noise immunity.

Automated design tools

Much of the effort of designing large logic machines has been automated through the application of electronic design automation (EDA).

Simple truth table-style descriptions of logic are often optimized with EDA software that automatically produces reduced systems of logic gates or smaller lookup tables that still produce the desired outputs. The most common example of this kind of software is the Espresso heuristic logic minimizer. Optimizing large logic systems may be done using the Quine–McCluskey algorithm or binary decision diagrams. There are promising experiments with genetic algorithms and annealing optimizations.

To automate costly engineering processes, some EDA can take state tables that describe state machines and automatically produce a truth table or a function table for the combinational logic of a state machine. The state table is a piece of text that lists each state, together with the conditions controlling the transitions between them and their associated output signals.

Often, real logic systems are designed as a series of sub-projects, which are combined using a tool flow. The tool flow is usually controlled with the help of a scripting language, a simplified computer language that can invoke the software design tools in the right order. Tool flows for large logic systems such as microprocessors can be thousands of commands long, and combine the work of hundreds of engineers. Writing and debugging tool flows is an established engineering specialty in companies that produce digital designs. The tool flow usually terminates in a detailed computer file or set of files that describe how to physically construct the logic. Often it consists of instructions on how to draw the transistors and wires on an integrated circuit or a printed circuit board.

Parts of tool flows are debugged by verifying the outputs of simulated logic against the expected outputs. The test tools take computer files with sets of inputs and outputs and highlight discrepancies between the simulated behavior and the expected behavior. Once the input data is believed to be correct, the design itself must still be verified for correctness. Some tool flows verify designs by first producing a design, then scanning the design to produce compatible input data for the tool flow. If the scanned data matches the input data, then the tool flow has probably not introduced errors.
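
In its simplest form this check is a vector-by-vector comparison between simulated and expected outputs. The sketch below assumes a hypothetical file format with one "inputs expected-output" pair per line, and a Python function standing in for the simulator.

    # Sketch: compare simulated outputs against expected outputs, vector by vector.
    # The file format (one "inputs expected" pair per line) is hypothetical.
    def check_vectors(vector_file, simulate):
        mismatches = 0
        with open(vector_file) as f:
            for lineno, line in enumerate(f, 1):
                if not line.strip():
                    continue
                inputs, expected = line.split()
                actual = simulate(inputs)
                if actual != expected:
                    mismatches += 1
                    print('line {}: inputs {} -> got {}, expected {}'.format(
                        lineno, inputs, actual, expected))
        return mismatches == 0

    # Example with a trivial "design": a two-input AND gate simulated in Python.
    def and_gate(bits):
        return '1' if bits == '11' else '0'

    # check_vectors('and_gate.vec', and_gate) would return True only if every
    # line of the (hypothetical) and_gate.vec file matches the simulated behavior.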

The functional verification data are usually called test vectors. The functional test vectors may be preserved and used in the factory to test whether newly constructed logic works correctly. However, functional test patterns do not discover all fabrication faults. Production tests are often designed by automatic test pattern generation software tools. These generate test vectors by examining the structure of the logic and systematically generating tests targeting particular potential faults. This way the fault coverage can closely approach 100%, provided the design is properly made testable (see next section).
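
The essence of grading a test set against a fault model can be sketched in a few lines: inject every single stuck-at fault into a gate-level model, re-simulate the test vectors, and count the faults whose presence changes at least one output. The two-gate circuit and the test vectors below are hypothetical.

    # Sketch of fault grading: inject every single stuck-at fault into a tiny,
    # hypothetical two-gate circuit (y = (a AND b) OR c) and count how many
    # faults the given test vectors detect.
    NETS = ['a', 'b', 'c', 'd', 'y']

    def simulate(inputs, fault=None):
        # inputs: dict with 'a', 'b', 'c'; fault: (net, stuck_value) or None
        v = dict(inputs)
        def forced(net, value):
            return fault[1] if fault and fault[0] == net else value
        v['a'] = forced('a', v['a'])
        v['b'] = forced('b', v['b'])
        v['c'] = forced('c', v['c'])
        v['d'] = forced('d', v['a'] & v['b'])
        v['y'] = forced('y', v['d'] | v['c'])
        return v['y']

    def fault_coverage(vectors):
        faults = [(net, stuck) for net in NETS for stuck in (0, 1)]
        detected = sum(
            1 for fault in faults
            if any(simulate(vec) != simulate(vec, fault) for vec in vectors))
        return detected / len(faults)

    # A hypothetical test set; this particular one happens to detect all ten faults.
    vectors = [
        {'a': 1, 'b': 1, 'c': 0},
        {'a': 0, 'b': 1, 'c': 0},
        {'a': 1, 'b': 0, 'c': 0},
        {'a': 0, 'b': 0, 'c': 1},
    ]
    print('fault coverage: {:.0%}'.format(fault_coverage(vectors)))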

Once a design exists, and is verified and testable, it often needs to be processed to be manufacturable as well. Modern integrated circuits have features smaller than the wavelength of the light used to expose the photoresist. Software designed for manufacturability adds interference patterns to the exposure masks to eliminate open circuits and enhance the masks' contrast.

Design for testability

There are several reasons for testing a logic circuit. When the circuit is first developed, it is necessary to verify that the designed circuit meets the required functional and timing specifications. When multiple copies of a correctly designed circuit are being manufactured, it is essential to test each copy to ensure that the manufacturing process has not introduced any flaws.

A large logic machine (say, with more than a hundred logical variables) can have an astronomical number of possible states. Obviously, factory testing every state of such a machine is infeasible: even if testing each state took only a microsecond, there are more possible states than there are microseconds since the universe began.
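
The arithmetic is easy to check; the snippet below assumes an age of the universe of roughly 13.8 billion years.

    # Rough check: a machine with 100 state bits versus microseconds since the Big Bang.
    states = 2 ** 100                                        # about 1.3e30 possible states
    age_of_universe_us = 13.8e9 * 365.25 * 24 * 3600 * 1e6   # about 4.4e23 microseconds
    print(states > age_of_universe_us)                       # True, by a factor of roughly three million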

Large logic machines are almost always designed as assemblies of smaller logic machines. To save time, the smaller sub-machines are isolated by permanently installed design for test circuitry, and are tested independently. One common testing scheme provides a test mode that forces some part of the logic machine to enter a test cycle. The test cycle usually exercises large independent parts of the machine.

Boundary scan is a common test scheme that uses serial communication with external test equipment through one or more shift registers known as scan chains. Serial scans have only one or two wires to carry the data, and minimize the physical size and expense of the infrequently used test logic. After all the test data bits are in place, the design is reconfigured to be in normal mode and one or more clock pulses are applied, to test for faults (e.g. stuck-at low or stuck-at high) and capture the test result into flip-flops or latches in the scan shift register(s). Finally, the result of the test is shifted out to the block boundary and compared against the predicted good machine result.
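
The mechanism can be sketched behaviorally in a few lines of Python: a hypothetical three-flip-flop scan chain is loaded serially with a test pattern, a single capture clock samples the outputs of the logic under test (here simply an inverter on each bit), and the captured result is shifted back out for comparison with the expected value.

    # Behavioral sketch of a scan chain: shift a pattern in serially, pulse the
    # capture clock once, then shift the captured result back out.  The wrapped
    # combinational logic (inverting each bit) is hypothetical.
    class ScanChain:
        def __init__(self, length):
            self.flops = [0] * length

        def shift_in(self, bits):
            # Test mode: the flip-flops form a serial shift register.
            for b in bits:
                self.flops = [b] + self.flops[:-1]

        def capture(self, logic):
            # Normal mode, one clock pulse: sample the combinational outputs.
            self.flops = logic(self.flops)

        def shift_out(self):
            out, self.flops = list(self.flops), [0] * len(self.flops)
            return out

    chain = ScanChain(3)
    chain.shift_in([1, 0, 1])                           # load the test stimulus
    chain.capture(lambda bits: [1 - b for b in bits])   # logic under test: invert each bit
    print(chain.shift_out())                            # [0, 1, 0] if the logic is fault-free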

In a board-test environment, serial to parallel testing has been formalized as the JTAG standard.

Trade-offs

Cost

Since a digital system may use many logic gates, the overall cost of building a computer correlates strongly with the cost of a logic gate. In the 1930s, the earliest digital logic systems were constructed from telephone relays because these were inexpensive and relatively reliable.

The earliest integrated circuits were constructed to save weight and permit the Apollo Guidance Computer to control an inertial guidance system for a spacecraft. The first integrated circuit logic gates cost nearly US$50, which in 2023 would be equivalent to $515. Mass-produced gates on integrated circuits became the least-expensive method to construct digital logic.

With the rise of integrated circuits, reducing the absolute number of chips used represented another way to save costs. The goal of a designer is not just to make the simplest circuit, but to keep the component count down. Sometimes this results in more complicated designs with respect to the underlying digital logic but nevertheless reduces the number of components, board size, and even power consumption.

Reliability

Another major motive for reducing component count on printed circuit boards is to reduce the manufacturing defect rate due to failed soldered connections and increase reliability. Defect and failure rates tend to increase along with the total number of component pins.

The failure of a single logic gate may cause a digital machine to fail. Where additional reliability is required, redundant logic can be provided. Redundancy adds cost and power consumption over a non-redundant system.

The reliability of a logic gate can be described by its mean time between failure (MTBF). Digital machines first became useful when the MTBF for a switch rose above a few hundred hours. Even so, many of these machines had complex, well-rehearsed repair procedures, and would be nonfunctional for hours because a tube burned out or a moth got stuck in a relay. Modern transistorized integrated circuit logic gates have MTBFs greater than 82 billion hours (8.2×10¹⁰ h). This level of reliability is required because integrated circuits have so many logic gates.
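
To see why, assume (as a simplification) independent gate failures with constant failure rates; the rates then add, so the MTBF of the whole chip is roughly the gate MTBF divided by the gate count. The gate count used below is an illustrative assumption.

    # Series-reliability estimate: with independent, constant failure rates the
    # rates add, so system MTBF is roughly gate MTBF / number of gates.
    gate_mtbf_hours = 8.2e10        # MTBF of a single gate
    gates_on_chip = 1e9             # a hypothetical billion-gate chip
    system_mtbf_hours = gate_mtbf_hours / gates_on_chip
    print(system_mtbf_hours)        # prints 82.0: why per-gate reliability must be so high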

Fan-out

Fan-out describes how many logic inputs can be controlled by a single logic output without exceeding the electrical current ratings of the gate outputs. The minimum practical fan-out is about five. Modern electronic logic gates using CMOS transistors for switches have higher fan-outs.

Speed

The switching speed describes how long it takes a logic output to change from true to false or vice versa. Faster logic can accomplish more operations in less time. Modern electronic digital logic routinely switches at GHz, and some laboratory systems switch at more than THz.

Logic families

Digital design started with relay logic, which is slow and occasionally suffers mechanical failures. Fan-outs were typically about 10, limited by the resistance of the coils and by arcing on the contacts from high voltages.

Later, vacuum tubes were used. These were very fast, but generated heat, and were unreliable because the filaments would burn out. Fan-outs were typically 5 to 7, limited by the heating from the tubes' current. In the 1950s, special computer tubes were developed with filaments that omitted volatile elements like silicon. These ran for hundreds of thousands of hours.

The first semiconductor logic family was resistor–transistor logic. This was a thousand times more reliable than tubes, ran cooler, and used less power, but had a very low fan-out of 3. Diode–transistor logic improved the fan-out up to about 7, and reduced the power. Some DTL designs used two power-supplies with alternating layers of NPN and PNP transistors to increase the fan-out.

Transistor–transistor logic (TTL) was a great improvement over these. In early devices, fan-out improved to 10, and later variations reliably achieved 20. TTL was also fast, with some variations achieving switching times as low as 20 ns. TTL is still used in some designs.

Emitter coupled logic is very fast but uses a lot of power. It was extensively used for high-performance computers, such as the Illiac IV, made up of many medium-scale components.

By far, the most common digital integrated circuits built today use CMOS logic, which is fast, offers high circuit density and low power per gate. This is used even in large, fast computers, such as the IBM System z.

Recent developments

In 2009, researchers discovered that memristors can implement Boolean state storage and provide a complete logic family with very small amounts of space and power, using familiar CMOS semiconductor processes.

The discovery of superconductivity has enabled the development of rapid single flux quantum (RSFQ) circuit technology, which uses Josephson junctions instead of transistors. Most recently, attempts are being made to construct purely optical computing systems capable of processing digital information using nonlinear optical elements.
