Tuesday, October 6, 2020

Digital electronics

From Wikipedia, the free encyclopedia
 
A digital signal has two or more distinguishable waveforms; in this example, high and low voltages, each of which can be mapped onto a digit.
 
An industrial digital controller

Digital electronics is a field of electronics involving the study of digital signals and the engineering of devices that use or produce them. This is in contrast to analog electronics and analog signals.

Digital electronic circuits are usually made from large assemblies of logic gates, often packaged in integrated circuits. Complex devices may have simple electronic representations of Boolean logic functions.

History

The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705), who also established that, by using the binary system, the principles of arithmetic and logic could be joined. Digital logic as we know it was the brainchild of George Boole in the mid-19th century. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's 1907 modification of the Fleming valve could be used as an AND gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, shared the 1954 Nobel Prize in Physics for creating the first modern electronic AND gate in 1924.

Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II, mechanical analog computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic digital computers were developed. Originally they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).

The Z3 was an electromechanical computer designed by Konrad Zuse. Finished in 1941, it was the world's first working programmable, fully automatic digital computer. The purely electronic digital computers that followed were made possible by John Ambrose Fleming's invention of the vacuum tube in 1904.

At the same time that digital calculation replaced analog, purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents. John Bardeen and Walter Brattain invented the point-contact transistor at Bell Labs in 1947, followed by William Shockley inventing the bipolar junction transistor at Bell Labs in 1948.

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of vacuum tubes. Their first transistorised computer, the first in the world, was operational by 1953, and a second version was completed there in April 1955. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors were smaller, more reliable, had indefinite lifespans, and required less power, thereby giving off less heat and allowing much denser concentrations of circuits, up to tens of thousands in a relatively compact space.

While working at Texas Instruments in July 1958, Jack Kilby recorded his initial ideas concerning the integrated circuit (IC), then successfully demonstrated the first working integrated circuit on 12 September 1958. Kilby's chip was made of germanium. The following year, Robert Noyce at Fairchild Semiconductor invented the silicon integrated circuit. The basis for Noyce's silicon IC was the planar process, developed in early 1959 by Jean Hoerni, who was in turn building on Mohamed Atalla's silicon surface passivation method developed in 1957. This new technique, the integrated circuit, allowed for quick, low-cost fabrication of complex circuits by having a set of electronic circuits on one small plate ("chip") of semiconductor material, normally silicon.

Digital revolution and digital age

The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. The MOSFET's advantages include high scalability, affordability, low power consumption, and high transistor density. Its rapid on–off electronic switching speed also makes it ideal for generating pulse trains, the basis for electronic digital signals, in contrast to BJTs, which more slowly generate analog signals resembling sine waves. Along with MOS large-scale integration (LSI), these factors make the MOSFET an important switching device for digital circuits. The MOSFET revolutionized the electronics industry, and is the most common semiconductor device. MOSFETs became the fundamental building blocks of digital electronics during the Digital Revolution of the late 20th to early 21st centuries, paving the way for the Digital Age of the early 21st century.

In the early days of integrated circuits, each chip was limited to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. The wide adoption of the MOSFET by the early 1970s led to the first large-scale integration (LSI) chips with more than 10,000 transistors on a single chip. Following the wide adoption of CMOS, a type of MOSFET logic, by the 1980s, millions and then billions of MOSFETs could be placed on one chip as the technology progressed, and good designs required thorough planning, giving rise to new design methods. As of 2013, billions of MOSFETs are manufactured every day.

The wireless revolution, the introduction and proliferation of wireless networks, began in the 1990s and was enabled by the wide adoption of MOSFET-based RF power amplifiers (power MOSFET and LDMOS) and RF circuits (RF CMOS). Wireless networks allowed for public digital transmission without the need for cables, leading to digital television (digital TV), GPS, satellite radio, wireless Internet and mobile phones through the 1990s–2000s.

Discrete cosine transform (DCT) coding, a data compression technique first proposed by Nasir Ahmed in 1972, enabled practical digital media transmission, with image compression formats such as JPEG (1992), video coding formats such as H.26x (1988 onwards) and MPEG (1993 onwards), audio coding standards such as Dolby Digital (1991) and MP3 (1994), and digital TV standards such as video-on-demand (VOD) and high-definition television (HDTV). Internet video was popularized by YouTube, an online video platform founded by Chad Hurley, Jawed Karim and Steve Chen in 2005, which enabled the video streaming of MPEG-4 AVC (H.264) user-generated content from anywhere on the World Wide Web.

Properties

An advantage of digital circuits when compared to analog circuits is that signals represented digitally can be transmitted without degradation caused by noise. For example, a continuous audio signal transmitted as a sequence of 1s and 0s can be reconstructed without error, provided the noise picked up in transmission is not enough to prevent identification of the 1s and 0s.
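
A minimal Python sketch of this idea (the voltage levels, noise bound and threshold here are assumed purely for illustration):

    # Recover transmitted bits from noisy received voltage samples.
    # Assumed idealised link: logic 0 sent as 0.0 V, logic 1 as 5.0 V,
    # with the decision threshold halfway between the two levels.
    import random

    def transmit(bits, high=5.0, noise=0.8):
        """Map bits to voltages and add bounded random noise."""
        return [b * high + random.uniform(-noise, noise) for b in bits]

    def receive(samples, threshold=2.5):
        """Threshold each received sample back to a bit."""
        return [1 if v > threshold else 0 for v in samples]

    original = [1, 0, 1, 1, 0, 0, 1]
    recovered = receive(transmit(original))
    assert recovered == original  # noise stays below the decision margin, so no errors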

In a digital system, a more precise representation of a signal can be obtained by using more binary digits to represent it. While this requires more digital circuits to process the signals, each digit is handled by the same kind of hardware, resulting in an easily scalable system. In an analog system, additional resolution requires fundamental improvements in the linearity and noise characteristics of each step of the signal chain.

With computer-controlled digital systems, new functions can be added through software revision, with no hardware changes needed. Often this can be done outside of the factory by updating the product's software. As a result, design errors can be corrected even after the product is in a customer's hands.

Information storage can be easier in digital systems than in analog ones. The noise immunity of digital systems permits data to be stored and retrieved without degradation. In an analog system, noise from aging and wear degrades the stored information. In a digital system, as long as the total noise is below a certain level, the information can be recovered perfectly. Even when more significant noise is present, the use of redundancy permits the recovery of the original data, provided too many errors do not occur.

In some cases, digital circuits use more energy than analog circuits to accomplish the same tasks, thus producing more heat and increasing the complexity of the circuits, for example through the inclusion of heat sinks. In portable or battery-powered systems this can limit the use of digital systems. For example, battery-powered cellular telephones often use a low-power analog front-end to amplify and tune in the radio signals from the base station. However, a base station has grid power and can use power-hungry, but very flexible, software radios. Such base stations can be easily reprogrammed to process the signals used in new cellular standards.

Many useful digital systems must translate from continuous analog signals to discrete digital signals. This causes quantization errors. Quantization error can be reduced if the system stores enough digital data to represent the signal to the desired degree of fidelity. The Nyquist–Shannon sampling theorem provides an important guideline as to how much digital data is needed to accurately portray a given analog signal.
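
The following Python sketch illustrates both points (the signal frequency, sampling rate and bit depth are arbitrary illustrative choices): a sine wave is sampled above its Nyquist rate and quantised to a given number of bits, and the quantization error never exceeds half a quantisation step.

    # Sample a 100 Hz sine wave at 1 kHz (well above the 200 Hz Nyquist rate)
    # and quantise each sample with the given number of bits.
    import math

    def sample_and_quantise(freq_hz=100.0, rate_hz=1000.0, bits=8, n=8):
        step = 2.0 / (2 ** bits)                               # quantisation step over [-1, 1]
        out = []
        for k in range(n):
            x = math.sin(2 * math.pi * freq_hz * k / rate_hz)  # "analog" value
            q = round(x / step) * step                         # nearest digital level
            out.append((x, q, abs(x - q)))                     # value, code, quantisation error
        return out

    for x, q, err in sample_and_quantise():
        print(f"analog={x:+.4f}  quantised={q:+.4f}  error={err:.5f}")
        assert err <= 1.0 / (2 ** 8)                           # error bounded by half a step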

In some systems, if a single piece of digital data is lost or misinterpreted, the meaning of large blocks of related data can completely change. For example, a single-bit error in audio data stored directly as linear pulse code modulation causes, at worst, a single click. Instead, many people use audio compression to save storage space and download time, even though a single bit error may cause a larger disruption.

Because of the cliff effect, it can be difficult for users to tell if a particular system is right on the edge of failure, or if it can tolerate much more noise before failing. Digital fragility can be reduced by designing a digital system for robustness. For example, a parity bit or other error management method can be inserted into the signal path. These schemes help the system detect errors, and then either correct the errors, or request retransmission of the data.
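
For example, a single even-parity bit appended to a data word lets the receiver detect (though not correct) any single-bit error. A small Python sketch of the scheme (the data word is invented for illustration):

    # Even parity: the appended bit makes the total number of 1s even,
    # so flipping any single bit in transit is detectable.
    def add_parity(bits):
        return bits + [sum(bits) % 2]          # append the parity bit

    def check_parity(word):
        return sum(word) % 2 == 0              # True if no single-bit error occurred

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    word = add_parity(data)
    assert check_parity(word)

    word[3] ^= 1                               # a single bit is flipped in transit
    assert not check_parity(word)              # the error is detected, not corrected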

Construction

A binary clock, hand-wired on breadboards

A digital circuit is typically constructed from small electronic circuits called logic gates that can be used to create combinational logic. Each logic gate is designed to perform a function of Boolean logic when acting on logic signals. A logic gate is generally created from one or more electrically controlled switches, usually transistors, although thermionic valves have seen historic use. The output of a logic gate can, in turn, control or feed into more logic gates.

Another form of digital circuit is constructed from lookup tables (many sold as "programmable logic devices", though other kinds of PLDs exist). Lookup tables can perform the same functions as machines based on logic gates, but can be easily reprogrammed without changing the wiring. This means that a designer can often repair design errors without changing the arrangement of wires. Therefore, in small volume products, programmable logic devices are often the preferred solution. They are usually designed by engineers using electronic design automation software.
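
A small Python sketch of the idea (the table contents are illustrative): the same lookup "hardware" implements XOR or NAND purely by rewriting its stored bits, with no rewiring.

    # A 2-input lookup table (LUT) configured as XOR.
    XOR_LUT = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    def lut(table, *inputs):
        return table[inputs]                   # the "circuit" is just a table read

    assert lut(XOR_LUT, 1, 0) == 1

    # "Reprogramming" the same LUT as NAND only changes the stored bits:
    NAND_LUT = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    assert lut(NAND_LUT, 1, 1) == 0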

Integrated circuits consist of multiple transistors on one silicon chip, and are the least expensive way to make large numbers of interconnected logic gates. Integrated circuits are usually interconnected on a printed circuit board, a board that holds electrical components and connects them together with copper traces.

Design

Engineers use many methods to minimize logic redundancy in order to reduce the circuit complexity. Reduced complexity reduces component count and potential errors and therefore typically reduces cost. Logic redundancy can be removed by several well-known techniques, such as binary decision diagrams, Boolean algebra, Karnaugh maps, the Quine–McCluskey algorithm, and the heuristic computer method. These operations are typically performed within a computer-aided design system.
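
As a toy illustration of logic minimization (the expressions are invented for this example), the function F = (A AND B) OR (A AND NOT B) reduces to simply A, which can be confirmed by exhaustively comparing truth tables in Python:

    # Verify that a minimised expression is equivalent to the original
    # by checking every input combination.
    from itertools import product

    original  = lambda a, b: (a and b) or (a and not b)   # two gates plus an inverter
    minimised = lambda a, b: a                            # no gates at all

    assert all(bool(original(a, b)) == bool(minimised(a, b))
               for a, b in product([False, True], repeat=2))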

Embedded systems with microcontrollers and programmable logic controllers are often used to implement digital logic for complex systems that don't require optimal performance. These systems are usually programmed by software engineers or by electricians, using ladder logic.

Representation

Representations are crucial to an engineer's design of digital circuits. To choose representations, engineers consider types of digital systems.

The classical way to represent a digital circuit is with an equivalent set of logic gates. Each logic symbol is represented by a different shape. The actual set of shapes was introduced in 1984 under IEEE/ANSI standard 91-1984 and is now in common use by integrated circuit manufacturers. Another way is to construct an equivalent system of electronic switches (usually transistors). This can be represented as a truth table.

Most digital systems divide into combinational and sequential systems. A combinational system always presents the same output when given the same inputs. A sequential system is a combinational system with some of the outputs fed back as inputs. This makes the digital machine perform a sequence of operations. The simplest sequential system is probably a flip flop, a mechanism that represents a binary digit or "bit". Sequential systems are often designed as state machines. In this way, engineers can design a system's gross behavior, and even test it in a simulation, without considering all the details of the logic functions.
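
A minimal Python sketch of a flip flop as a sequential element (a behavioural model, not any particular circuit): its stored bit changes only when the clock is pulsed.

    # A D flip-flop: the simplest sequential element.
    class DFlipFlop:
        def __init__(self):
            self.q = 0          # stored bit (current state)
            self.d = 0          # data input, sampled on the next clock edge
        def clock(self):
            self.q = self.d     # the state updates only on the clock edge

    ff = DFlipFlop()
    ff.d = 1
    assert ff.q == 0            # output unchanged until the clock edge...
    ff.clock()
    assert ff.q == 1            # ...then the new value appears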

Sequential systems divide into two further subcategories. "Synchronous" sequential systems change state all at once when a clock signal changes state. "Asynchronous" sequential systems propagate changes whenever inputs change. Synchronous sequential systems are made of well-characterized asynchronous circuits, such as flip-flops, that change only when the clock changes and that have carefully designed timing margins.

For logic simulation, digital circuit representations have digital file formats that can be processed by computer programs.

Synchronous systems

A 4-bit ring counter using D-type flip flops is an example of synchronous logic. Each device is connected to the clock signal, and they all update together.

The usual way to implement a synchronous sequential state machine is to divide it into a piece of combinational logic and a set of flip flops called a state register. The state register represents the state as a binary number. The combinational logic produces the binary representation for the next state. On each clock cycle, the state register captures the feedback generated from the previous state of the combinational logic and feeds it back as an unchanging input to the combinational part of the state machine. The clock rate is limited by the most time-consuming logic calculation in the combinational logic.
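
The same structure can be sketched in a few lines of Python (using a 2-bit counter as an assumed example of the combinational next-state logic):

    # Synchronous state machine: a state register plus combinational
    # next-state logic, advanced once per clock cycle.
    def next_state(state):               # purely combinational: output depends only on its input
        return (state + 1) % 4

    state = 0                            # the state register
    trace = []
    for _ in range(6):                   # each loop iteration models one clock cycle
        trace.append(state)
        state = next_state(state)        # the register captures the next state on the edge

    assert trace == [0, 1, 2, 3, 0, 1]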

Asynchronous systems

Most digital logic is synchronous because it is easier to create and verify a synchronous design. However, asynchronous logic has the advantage of its speed not being constrained by an arbitrary clock; instead, it runs at the maximum speed of its logic gates. Building an asynchronous system using faster parts makes the circuit faster.

Nevertheless, most systems need to accept external unsynchronized signals into their synchronous logic circuits. This interface is inherently asynchronous and must be analyzed as such. Examples of widely used asynchronous circuits include synchronizer flip-flops, switch debouncers and arbiters.

Asynchronous logic components can be hard to design because all possible states, in all possible timings, must be considered. The usual method is to construct a table of the minimum and maximum time that each such state can exist and then adjust the circuit to minimize the number of such states. The designer must force the circuit to periodically wait for all of its parts to enter a compatible state (this is called "self-resynchronization"). Without careful design, it is easy to accidentally produce asynchronous logic that is unstable; that is, the real electronics will have unpredictable results because of the cumulative delays caused by small variations in the values of the electronic components.

Register transfer systems

Example of a simple circuit with a toggling output. The inverter forms the combinational logic in this circuit, and the register holds the state.

Many digital systems are data flow machines. These are usually designed using synchronous register transfer logic, using hardware description languages such as VHDL or Verilog.

In register transfer logic, binary numbers are stored in groups of flip flops called registers. A sequential state machine controls when each register accepts new data from its input. The outputs of each register are a bundle of wires called a bus that carries that number to other calculations. A calculation is simply a piece of combinational logic. Each calculation also has an output bus, and these may be connected to the inputs of several registers. Sometimes a register will have a multiplexer on its input so that it can store a number from any one of several buses.
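
A small Python sketch of these elements (a behavioural model with invented values, not any particular design): a multiplexer selects which bus feeds a register, and the register loads only when the controlling state machine asserts its enable.

    # Register-transfer sketch: two source buses, an input multiplexer,
    # and a register with a load enable.
    class Register:
        def __init__(self):
            self.value = 0
        def clock(self, data, load):
            if load:                      # the control state machine decides when to load
                self.value = data

    def mux(select, bus_a, bus_b):        # combinational input multiplexer
        return bus_b if select else bus_a

    r = Register()
    r.clock(mux(0, 7, 42), load=1)        # load from bus A
    assert r.value == 7
    r.clock(mux(1, 7, 42), load=0)        # enable not asserted: the register holds its value
    assert r.value == 7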

Asynchronous register-transfer systems (such as computers) have a general solution. In the 1980s, some researchers discovered that almost all synchronous register-transfer machines could be converted to asynchronous designs by using first-in-first-out synchronization logic. In this scheme, the digital machine is characterized as a set of data flows. In each step of the flow, a synchronization circuit determines when the outputs of that step are valid and instructs the next stage when to use these outputs.

Computer design

Intel 80486DX2 microprocessor

The most general-purpose register-transfer logic machine is a computer. This is basically an automatic binary abacus. The control unit of a computer is usually designed as a microprogram run by a microsequencer. A microprogram is much like a player-piano roll. Each table entry or "word" of the microprogram commands the state of every bit that controls the computer. The sequencer then counts, and the count addresses the memory or combinational logic machine that contains the microprogram. The bits from the microprogram control the arithmetic logic unit, memory and other parts of the computer, including the microsequencer itself. A "specialized computer" is usually a conventional computer with special-purpose control logic or microprogram.

In this way, the complex task of designing the controls of a computer is reduced to a simpler task of programming a collection of much simpler logic machines.
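
A toy microsequencer in Python (the control-bit names and the three-step program are invented purely for illustration): a counter steps through a table of control words, and each word sets the control bits for that step.

    # A counter (the microsequencer) addresses a table of control words
    # (the microprogram); each word drives the machine's control bits.
    MICROPROGRAM = [
        {"load_a": 1, "add": 0, "store": 0},
        {"load_a": 0, "add": 1, "store": 0},
        {"load_a": 0, "add": 0, "store": 1},
    ]

    def run(program):
        pc = 0                                   # the microsequencer's counter
        while pc < len(program):
            word = program[pc]                   # fetch the current control word
            print(f"step {pc}: control bits = {word}")
            pc += 1                              # count on to the next micro-instruction

    run(MICROPROGRAM)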

Almost all computers are synchronous. However, numerous true asynchronous computers have also been built. One example is the Aspida DLX core. Another was offered by ARM Holdings. Speed advantages have not materialized, because modern computer designs already run at the speed of their slowest component, usually memory. These do use somewhat less power because a clock distribution network is not needed. An unexpected advantage is that asynchronous computers do not produce spectrally-pure radio noise, so they are used in some mobile-phone base-station controllers. They may be more secure in cryptographic applications because their electrical and radio emissions can be more difficult to decode.

Computer architecture

Computer architecture is a specialized engineering activity that tries to arrange the registers, calculation logic, buses and other parts of the computer in the best way for some purpose. Computer architects have applied large amounts of ingenuity to computer design to reduce the cost and increase the speed and immunity to programming errors of computers. An increasingly common goal is to reduce the power used in a battery-powered computer system, such as a cell-phone. Many computer architects serve an extended apprenticeship as microprogrammers.

Design issues in digital circuits

Digital circuits are made from analog components. The design must assure that the analog nature of the components doesn't dominate the desired digital behavior. Digital systems must manage noise and timing margins, parasitic inductances and capacitances, and filter power connections.

Bad designs have intermittent problems such as "glitches", vanishingly fast pulses that may trigger some logic but not others, "runt pulses" that do not reach valid "threshold" voltages, or unexpected ("undecoded") combinations of logic states.

Additionally, where clocked digital systems interface to analog systems or systems that are driven from a different clock, the digital system can be subject to metastability where a change to the input violates the set-up time for a digital input latch. This situation will self-resolve, but will take a random time, and while it persists can result in invalid signals being propagated within the digital system for a short time.

Since digital circuits are made from analog components, digital circuits calculate more slowly than low-precision analog circuits that use a similar amount of space and power. However, the digital circuit will calculate more repeatably, because of its high noise immunity. On the other hand, in the high-precision domain (for example, where 14 or more bits of precision are needed), analog circuits require much more power and area than digital equivalents.

Automated design tools

To save costly engineering effort, much of the work of designing large logic machines has been automated. The computer programs are called "electronic design automation tools" or just "EDA."

Simple truth table-style descriptions of logic are often optimized with EDA that automatically produces reduced systems of logic gates or smaller lookup tables that still produce the desired outputs. The most common example of this kind of software is the Espresso heuristic logic minimizer.

Most practical algorithms for optimizing large logic systems use algebraic manipulations or binary decision diagrams, and there are promising experiments with genetic algorithms and annealing optimizations.

To automate costly engineering processes, some EDA tools can take state tables that describe state machines and automatically produce a truth table or a function table for the combinational logic of a state machine. The state table is a piece of text that lists each state, together with the conditions controlling the transitions between them and the associated output signals.

It is common for the function tables of such computer-generated state-machines to be optimized with logic-minimization software such as Minilog.

Often, real logic systems are designed as a series of sub-projects, which are combined using a "tool flow." The tool flow is usually a "script," a simplified computer language that can invoke the software design tools in the right order.

Tool flows for large logic systems such as microprocessors can be thousands of commands long, and combine the work of hundreds of engineers.

Writing and debugging tool flows is an established engineering specialty in companies that produce digital designs. The tool flow usually terminates in a detailed computer file or set of files that describe how to physically construct the logic. Often it consists of instructions to draw the transistors and wires on an integrated circuit or a printed circuit board.

Parts of tool flows are "debugged" by verifying the outputs of simulated logic against the expected outputs. The test tools take computer files with sets of inputs and outputs, and highlight discrepancies between the simulated behavior and the expected behavior.

Once the input data is believed correct, the design itself must still be verified for correctness. Some tool flows verify designs by first producing a design, and then scanning the design to produce compatible input data for the tool flow. If the scanned data matches the input data, then the tool flow has probably not introduced errors.

The functional verification data are usually called "test vectors". The functional test vectors may be preserved and used in the factory to test that newly constructed logic works correctly. However, functional test patterns don't discover common fabrication faults. Production tests are often designed by software tools called "test pattern generators". These generate test vectors by examining the structure of the logic and systematically generating tests for particular faults. This way the fault coverage can closely approach 100%, provided the design is properly made testable (see next section).
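
A minimal Python sketch of applying functional test vectors (the circuit, a 1-bit half adder, and its vectors are invented for illustration):

    # Compare a simulated circuit against its functional test vectors.
    def half_adder(a, b):
        return a ^ b, a & b                      # (sum, carry)

    TEST_VECTORS = [                             # (inputs, expected outputs)
        ((0, 0), (0, 0)),
        ((0, 1), (1, 0)),
        ((1, 0), (1, 0)),
        ((1, 1), (0, 1)),
    ]

    for inputs, expected in TEST_VECTORS:
        got = half_adder(*inputs)
        assert got == expected, f"discrepancy at {inputs}: {got} != {expected}"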

Once a design exists, and is verified and testable, it often needs to be processed to be manufacturable as well. Modern integrated circuits have features smaller than the wavelength of the light used to expose the photoresist. Manufacturability software adds interference patterns to the exposure masks to eliminate open-circuits, and enhance the masks' contrast.

Design for testability

There are several reasons for testing a logic circuit. When the circuit is first developed, it is necessary to verify that the designed circuit meets the required functional and timing specifications. When multiple copies of a correctly designed circuit are being manufactured, it is essential to test each copy to ensure that the manufacturing process has not introduced any flaws.

A large logic machine (say, with more than a hundred logical variables) can have an astronomical number of possible states. Obviously, in the factory, testing every state is impractical if testing each state takes a microsecond, and there are more states than the number of microseconds since the universe began. This ridiculous-sounding case is typical.

Large logic machines are almost always designed as assemblies of smaller logic machines. To save time, the smaller sub-machines are isolated by permanently installed "design for test" circuitry, and are tested independently.

One common test scheme known as "scan design" moves test bits serially (one after another) from external test equipment through one or more serial shift registers known as "scan chains". Serial scans have only one or two wires to carry the data, and minimize the physical size and expense of the infrequently used test logic.

After all the test data bits are in place, the design is reconfigured to be in "normal mode" and one or more clock pulses are applied, to test for faults (e.g. stuck-at low or stuck-at high) and capture the test result into flip-flops and/or latches in the scan shift register(s). Finally, the result of the test is shifted out to the block boundary and compared against the predicted "good machine" result.
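
A behavioural Python sketch of the scan idea (the chain length, test pattern and the stand-in "capture" step are all assumed for illustration):

    # Shift a test pattern serially into the scan chain, apply one
    # normal-mode clock, and shift the captured result back out.
    def scan_in(chain, bits):
        for b in bits:                            # serial shift: one bit per clock pulse
            chain = [b] + chain[:-1]
        return chain

    def scan_out(chain):
        return list(chain)                        # shifted out serially for comparison

    chain = [0, 0, 0, 0]
    chain = scan_in(chain, [1, 0, 1, 1])          # load the test stimulus
    assert chain == [1, 1, 0, 1]                  # the last bit shifted in sits first
    captured = [1 - b for b in chain]             # stand-in for one normal-mode clock pulse
    assert scan_out(captured) == [0, 0, 1, 0]     # compare with the predicted "good machine" result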

In a board-test environment, serial to parallel testing has been formalized with a standard called "JTAG" (named after the "Joint Test Action Group" that made it).

Another common testing scheme provides a test mode that forces some part of the logic machine to enter a "test cycle." The test cycle usually exercises large independent parts of the machine.

Trade-offs

Several numbers determine the practicality of a system of digital logic: cost, reliability, fanout and speed. Engineers have explored numerous electronic devices to get a favourable combination of these characteristics.

Cost

The cost of a logic gate is crucial, primarily because very many gates are needed to build a computer or other advanced digital system, and because the more gates that can be used, the more capable and responsive the machine can become. Since the bulk of a digital computer is simply an interconnected network of logic gates, the overall cost of building a computer correlates strongly with the price per logic gate. In the 1930s, the earliest digital logic systems were constructed from telephone relays because these were inexpensive and relatively reliable. After that, electrical engineers always used the cheapest available electronic switches that could still fulfill the requirements.

The earliest integrated circuits were a happy accident. They were constructed not to save money, but to save weight, and permit the Apollo Guidance Computer to control an inertial guidance system for a spacecraft. The first integrated circuit logic gates cost nearly $50 (in 1960 dollars, when an engineer earned $10,000/year). Much to the surprise of many involved, by the time the circuits were mass-produced, they had become the least-expensive method of constructing digital logic. Improvements in this technology have driven all subsequent improvements in cost.

With the rise of integrated circuits, reducing the absolute number of chips used represented another way to save costs. The goal of a designer is not just to make the simplest circuit, but to keep the component count down. Sometimes this results in more complicated designs with respect to the underlying digital logic but nevertheless reduces the number of components, board size, and even power consumption. A major motive for reducing component count on printed circuit boards is to reduce the manufacturing defect rate and increase reliability, as every soldered connection is a potentially bad one, so the defect and failure rates tend to increase along with the total number of component pins.

For example, in some logic families, NAND gates are the simplest digital gate to build. All other logical operations can be implemented with NAND gates. If a circuit already required a single NAND gate, and a single chip normally carried four NAND gates, then the remaining gates could be used to implement other logical operations like logical AND. This could eliminate the need for a separate chip containing those different types of gates.
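
A short Python sketch of NAND universality (behavioural only, not tied to any particular chip):

    # Build NOT, AND and OR from NAND alone.
    def nand(a, b):
        return 0 if (a and b) else 1

    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    assert [not_(x) for x in (0, 1)] == [1, 0]
    assert [and_(a, b) for a in (0, 1) for b in (0, 1)] == [0, 0, 0, 1]
    assert [or_(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 1]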

Reliability

The "reliability" of a logic gate describes its mean time between failure (MTBF). Digital machines often have millions of logic gates. Also, most digital machines are "optimized" to reduce their cost. The result is that often, the failure of a single logic gate will cause a digital machine to stop working. It is possible to design machines to be more reliable by using redundant logic which will not malfunction as a result of the failure of any single gate (or even any two, three, or four gates), but this necessarily entails using more components, which raises the financial cost and also usually increases the weight of the machine and may increase the power it consumes.

Digital machines first became useful when the MTBF for a switch got above a few hundred hours. Even so, many of these machines had complex, well-rehearsed repair procedures, and would be nonfunctional for hours because a tube burned out, or a moth got stuck in a relay. Modern transistorized integrated circuit logic gates have MTBFs greater than 82 billion hours (8.2 · 10¹⁰ hours), and need them because they have so many logic gates.

Fanout

Fanout describes how many logic inputs can be controlled by a single logic output without exceeding the electrical current ratings of the gate outputs. The minimum practical fanout is about five. Modern electronic logic gates using CMOS transistors for switches have fanouts near fifty, and can sometimes go much higher.

Speed

The "switching speed" describes how many times per second an inverter (an electronic representation of a "logical not" function) can change from true to false and back. Faster logic can accomplish more operations in less time. Digital logic first became useful when switching speeds got above 50 Hz, because that was faster than a team of humans operating mechanical calculators. Modern electronic digital logic routinely switches at 5 GHz (5 · 109 Hz), and some laboratory systems switch at more than 1 THz (1 · 1012 Hz).

Logic families

Design started with relays. Relay logic was relatively inexpensive and reliable, but slow. Occasionally a mechanical failure would occur. Fanouts were typically about 10, limited by the resistance of the coils and arcing on the contacts from high voltages.

Later, vacuum tubes were used. These were very fast, but generated heat, and were unreliable because the filaments would burn out. Fanouts were typically 5 to 7, limited by the heating from the tubes' current. In the 1950s, special "computer tubes" were developed with filaments that omitted volatile elements like silicon. These ran for hundreds of thousands of hours.

The first semiconductor logic family was resistor–transistor logic. This was a thousand times more reliable than tubes, ran cooler, and used less power, but had a very low fan-in of 3. Diode–transistor logic improved the fanout up to about 7, and reduced the power. Some DTL designs used two power-supplies with alternating layers of NPN and PNP transistors to increase the fanout.

Transistor–transistor logic (TTL) was a great improvement over these. In early devices, fanout improved to 10, and later variations reliably achieved 20. TTL was also fast, with some variations achieving switching times as low as 20 ns. TTL is still used in some designs.

Emitter coupled logic is very fast but uses a lot of power. It was extensively used for high-performance computers made up of many medium-scale components (such as the Illiac IV).

By far, the most common digital integrated circuits built today use CMOS logic, which is fast, offers high circuit density and low power per gate. This is used even in large, fast computers, such as the IBM System z.

Recent developments

In 2009, researchers discovered that memristors can implement Boolean state storage (similar to a flip flop) and logical operations such as implication and inversion, providing a complete logic family with very small amounts of space and power, using familiar CMOS semiconductor processes.

The discovery of superconductivity has enabled the development of rapid single flux quantum (RSFQ) circuit technology, which uses Josephson junctions instead of transistors. Most recently, attempts are being made to construct purely optical computing systems capable of processing digital information using nonlinear optical elements.

Electronics

From Wikipedia, the free encyclopedia
 
Surface-mount electronic components

Electronics comprises the physics, engineering, technology and applications that deal with the emission, flow and control of electrons in vacuum and matter. It uses active devices to control electron flow by amplification and rectification, which distinguishes it from classical electrical engineering which uses passive effects such as resistance, capacitance and inductance to control current flow.

Electronics has had a major effect on the development of modern society. The identification of the electron in 1897, along with the subsequent invention of the vacuum tube which could amplify and rectify small electrical signals, inaugurated the field of electronics and the electron age. This distinction started around 1906 with the invention by Lee De Forest of the triode, which made electrical amplification of weak radio signals and audio signals possible with a non-mechanical device. Until 1950, this field was called "radio technology" because its principal application was the design and theory of radio transmitters, receivers, and vacuum tubes.

The term "solid-state electronics" emerged after the first working transistor was invented by William Shockley, Walter Houser Brattain and John Bardeen at Bell Labs in 1947. The MOSFET (MOS transistor) was later invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses, revolutionizing the electronics industry, and playing a central role in the microelectronics revolution and Digital Revolution. The MOSFET has since become the basic element in most modern electronic equipment, and is the most widely used electronic device in the world.

Electronics is widely used in information processing, telecommunication, and signal processing. The ability of electronic devices to act as switches makes digital information-processing possible. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed electronic components into a regular working system, called an electronic system; examples are computers or control systems. An electronic system may be a component of another engineered system or a standalone device. As of 2019 most electronic devices use semiconductor components to perform electron control. Commonly, electronic devices contain circuitry consisting of active semiconductors supplemented with passive elements; such a circuit is described as an electronic circuit. Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, integrated circuits, optoelectronics, and sensors, associated passive electrical components, and interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes amplification of weak signals possible.

The study of semiconductor devices and related technology is considered a branch of solid-state physics, whereas the design and construction of electronic circuits to solve practical problems come under electronics engineering. This article focuses on engineering aspects of electronics.

Branches of electronics

Electronics has branches as follows:

  1. Digital electronics
  2. Analogue electronics
  3. Microelectronics
  4. Circuit design
  5. Integrated circuits
  6. Power electronics
  7. Optoelectronics
  8. Semiconductor devices
  9. Embedded systems
  10. Audio electronics
  11. Telecommunications
  12. Nanoelectronics
  13. Bioelectronics

Electronic devices and components

One of the earliest Audion radio receivers, constructed by De Forest in 1914.
 
Electronics Technician performing a voltage check on a power circuit card in the air navigation equipment room aboard the aircraft carrier USS Abraham Lincoln (CVN-72).

An electronic component is any physical entity in an electronic system used to affect the electrons or their associated fields in a manner consistent with the intended function of the electronic system. Components are generally intended to be connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function (for example an amplifier, radio receiver, or oscillator). Components may be packaged singly, or in more complex groups as integrated circuits. Some common electronic components are capacitors, inductors, resistors, diodes, transistors, etc. Components are often categorized as active (e.g. transistors and thyristors) or passive (e.g. resistors, diodes, inductors and capacitors).

History of electronic components

Vacuum tubes (Thermionic valves) were among the earliest electronic components. They were almost solely responsible for the electronics revolution of the first half of the twentieth century. They allowed for vastly more complicated systems and gave us radio, television, phonographs, radar, long-distance telephony and much more. They played a leading role in the field of microwave and high power transmission as well as television receivers until the middle of the 1980s. Since that time, solid-state devices have all but completely taken over. Vacuum tubes are still used in some specialist applications such as high power RF amplifiers, cathode ray tubes, specialist audio equipment, guitar amplifiers and some microwave devices.

The first working point-contact transistor was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947. In April 1955, the IBM 608 was the first IBM product to use transistor circuits without any vacuum tubes and is believed to be the first all-transistorized calculator to be manufactured for the commercial market. The 608 contained more than 3,000 germanium transistors. Thomas J. Watson Jr. ordered all future IBM products to use transistors in their design. From that time on transistors were almost exclusively used for computer logic and peripherals. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.

The MOSFET (MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. Its advantages include high scalability, affordability, low power consumption, and high density. It revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET is the basic element in most modern electronic equipment, and has been central to the electronics revolution, the microelectronics revolution, and the Digital Revolution. The MOSFET has thus been credited as the birth of modern electronics, and possibly the most important invention in electronics.

Types of circuits

Circuits and components can be divided into two groups: analog and digital. A particular device may consist of circuitry that has one or the other or a mix of the two types.

Analog circuits

Hitachi J100 adjustable frequency drive chassis

Most analog electronic appliances, such as radio receivers, are constructed from combinations of a few types of basic circuits. Analog circuits use a continuous range of voltage or current as opposed to discrete levels as in digital circuits.

The number of different analog circuits so far devised is huge, especially because a 'circuit' can be defined as anything from a single component, to systems containing thousands of components.

Analog circuits are sometimes called linear circuits although many non-linear effects are used in analog circuits such as mixers, modulators, etc. Good examples of analog circuits include vacuum tube and transistor amplifiers, operational amplifiers and oscillators.

One rarely finds modern circuits that are entirely analog. These days analog circuitry may use digital or even microprocessor techniques to improve performance. This type of circuit is usually called "mixed signal" rather than analog or digital.

Sometimes it may be difficult to differentiate between analog and digital circuits as they have elements of both linear and non-linear operation. An example is the comparator which takes in a continuous range of voltage but only outputs one of two levels as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch having essentially two levels of output. In fact, many digital circuits are actually implemented as variations of analog circuits similar to this example – after all, all aspects of the real physical world are essentially analog, so digital effects are only realized by constraining analog behavior.

Digital circuits

Digital circuits are electric circuits based on a number of discrete voltage levels. Digital circuits are the most common physical representation of Boolean algebra, and are the basis of all digital computers. To most engineers, the terms "digital circuit", "digital system" and "logic" are interchangeable in the context of digital circuits. Most digital circuits use a binary system with two voltage levels labeled "0" and "1". Often logic "0" will be a lower voltage and referred to as "Low" while logic "1" is referred to as "High". However, some systems use the reverse definition ("0" is "High") or are current based. Quite often the logic designer may reverse these definitions from one circuit to the next as they see fit to facilitate the design. The definition of the levels as "0" or "1" is arbitrary.

Ternary (with three states) logic has been studied, and some prototype computers made.

Computers, electronic clocks, and programmable logic controllers (used to control industrial processes) are constructed of digital circuits. Digital signal processors are another example.


Heat dissipation and thermal management

Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long term reliability. Heat dissipation is mostly achieved by passive conduction/convection. Means to achieve greater dissipation include heat sinks and fans for air cooling, and other forms of computer cooling such as water cooling. These techniques use convection, conduction, and radiation of heat energy.

Noise

Electronic noise is defined as unwanted disturbances superposed on a useful signal that tend to obscure its information content. Noise is not the same as signal distortion caused by a circuit. Noise is associated with all electronic circuits. Noise may be electromagnetically or thermally generated; thermally generated noise can be decreased by lowering the operating temperature of the circuit. Other types of noise, such as shot noise, cannot be removed, as they are due to limitations in physical properties.

Electronics theory

Mathematical methods are integral to the study of electronics. To become proficient in electronics it is also necessary to become proficient in the mathematics of circuit analysis.

Circuit analysis is the study of methods of solving generally linear systems for unknown variables such as the voltage at a certain node or the current through a certain branch of a network. A common analytical tool for this is the SPICE circuit simulator.
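
As a small illustration of the kind of linear system involved (a hand-rolled sketch using NumPy rather than SPICE itself, with arbitrary component values), nodal analysis of a resistive divider can be written directly as a matrix equation:

    # Nodal analysis: a 10 V source feeds R1 = 1 kΩ into node 1,
    # and R2 = 2 kΩ ties node 1 to ground. Solve G·V = I for V.
    import numpy as np

    R1, R2, Vs = 1e3, 2e3, 10.0
    G = np.array([[1/R1 + 1/R2]])        # conductance matrix for the one unknown node
    I = np.array([Vs / R1])              # source current injected into the node
    V1 = np.linalg.solve(G, I)[0]

    print(f"V1 = {V1:.3f} V")            # expect 10 * R2 / (R1 + R2) ≈ 6.667 V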

Also important to electronics is the study and understanding of electromagnetic field theory.

Electronics lab

Due to the complex nature of electronics theory, laboratory experimentation is an important part of the development of electronic devices. These experiments are used to test or verify the engineer's design and detect errors. Historically, electronics labs have consisted of electronics devices and equipment located in a physical space, although in more recent years the trend has been towards electronics lab simulation software, such as CircuitLogix, Multisim, and PSpice.

Computer aided design (CAD)

Today's electronics engineers have the ability to design circuits using premanufactured building blocks such as power supplies, semiconductors (i.e. semiconductor devices, such as transistors), and integrated circuits. Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (ORCAD), EAGLE PCB and Schematic, Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), LabCentre Electronics (Proteus), gEDA, KiCad and many others.

Packaging methods

Many different methods of connecting components have been used over the years. For instance, early electronics often used point-to-point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used. Most modern day electronics now use printed circuit boards made of materials such as FR4, or the cheaper (and less hard-wearing) Synthetic Resin Bonded Paper (SRBP, also known as Paxoline/Paxolin (trade marks) and FR2) – characterised by its brown colour. Health and environmental concerns associated with electronics assembly have gained increased attention in recent years, especially for products destined for the European Union, with its Restriction of Hazardous Substances Directive (RoHS) and Waste Electrical and Electronic Equipment Directive (WEEE), which went into force in July 2006.

Electronic systems design

Electronic systems design deals with the multi-disciplinary design issues of complex electronic devices and systems, such as mobile phones and computers. The subject covers a broad spectrum, from the design and development of an electronic system (new product development) to assuring its proper function, service life and disposal. Electronic systems design is therefore the process of defining and developing complex electronic devices to satisfy specified requirements of the user.

Mounting Options

Electrical components are generally mounted in one of two ways: on the surface of the printed circuit board (surface-mount technology) or by inserting their leads through holes in the board (through-hole technology).

Electronics industry

The electronics industry consists of various sectors. The central driving force behind the entire electronics industry is the semiconductor industry sector, which has annual sales of over $481 billion as of 2018. The largest industry sector is e-commerce, which generated over $29 trillion in 2017. The most widely manufactured electronic device is the metal-oxide-semiconductor field-effect transistor (MOSFET), with an estimated 13 sextillion MOSFETs having been manufactured between 1960 and 2018.

Optical transistor

From Wikipedia, the free encyclopedia

An optical transistor, also known as an optical switch or a light valve, is a device that switches or amplifies optical signals. Light incident on an optical transistor's input changes the intensity of light emitted from the transistor's output, while output power is supplied by an additional optical source. Since the input signal intensity may be weaker than that of the source, an optical transistor amplifies the optical signal. The device is the optical analog of the electronic transistor that forms the basis of modern electronic devices. Optical transistors provide a means to control light using only light and have applications in optical computing and fiber-optic communication networks. Such technology has the potential to exceed the speed of electronics, while saving more power.

Since photons inherently do not interact with each other, an optical transistor must employ an operating medium to mediate interactions. This is done without converting optical to electronic signals as an intermediate step. Implementations using a variety of operating mediums have been proposed and experimentally demonstrated. However, their ability to compete with modern electronics is currently limited.

Applications

Optical transistors could be used to improve the performance of fiber-optic communication networks. Although fiber-optic cables are used to transfer data, tasks such as signal routing are done electronically. This requires optical-electronic-optical conversion, which forms a bottleneck. In principle, all-optical digital signal processing and routing is achievable using optical transistors arranged into photonic integrated circuits. The same devices could be used to create new types of optical amplifiers to compensate for signal attenuation along transmission lines.

A more elaborate application of optical transistors is the development of an optical digital computer in which components process photons rather than electrons. Further, optical transistors that operate using single photons could form an integral part of quantum information processing where they can be used to selectively address individual units of quantum information, known as qubits.

Comparison with electronics

The most commonly argued case for optical logic is that optical transistor switching times can be much faster than in conventional electronic transistors. This is due to the fact that the speed of light in an optical medium is typically much faster than the drift velocity of electrons in semiconductors.

Optical transistors can be directly linked to fiber-optic cables whereas electronics requires coupling via photodetectors and LEDs or lasers. The more natural integration of all-optical signal processors with fiber-optics would reduce the complexity and delay in the routing and other processing of signals in optical communication networks.

It remains questionable whether optical processing can reduce the energy required to switch a single transistor to be less than that for electronic transistors. To realistically compete, transistors require a few tens of photons per operation. It is clear, however, that this is achievable in proposed single-photon transistors for quantum information processing.

Perhaps the most significant advantage of optical over electronic logic is reduced power consumption. This comes from the absence of capacitance in the connections between individual logic gates. In electronics, the transmission line needs to be charged to the signal voltage. The capacitance of a transmission line is proportional to its length and it exceeds the capacitance of the transistors in a logic gate when its length is equal to that of a single gate. The charging of transmission lines is one of the main energy losses in electronic logic. This loss is avoided in optical communication where only enough energy to switch an optical transistor at the receiving end must be transmitted down a line. This fact has played a major role in the uptake of fiber optics for long distance communication but is yet to be exploited at the microprocessor level.

Besides the potential advantages of higher speed, lower power consumption and high compatibility with optical communication systems, optical transistors must satisfy a set of benchmarks before they can compete with electronics. No single design has yet satisfied all these criteria while outperforming the speed and power consumption of state-of-the-art electronics.

The criteria include:

  • Fan-out - Transistor output must be in the correct form and of sufficient power to operate the inputs of at least two transistors. This implies that the input and output wavelengths, beam shapes and pulse shapes must be compatible.
  • Logic level restoration - The signal needs to be ‘cleaned’ by each transistor. Noise and degradations in signal quality must be removed so that they do not propagate through the system and accumulate to produce errors.
  • Logic level independent of loss - In optical communication, the signal intensity decreases over distance due to absorption of light in the fiber optic cable. Therefore, a simple intensity threshold cannot distinguish between on and off signals for arbitrary length interconnects. To avoid errors, the system must encode zeros and ones at different frequencies, or use differential signaling, in which the ratio or difference of two powers carries the logic signal.

Implementations

Several schemes have been proposed to implement all-optical transistors. In many cases, a proof of concept has been experimentally demonstrated. Among the designs are those based on:

  • electromagnetically induced transparency
    • in an optical cavity or microresonator, where the transmission is controlled by a weaker flux of gate photons
    • in free space, i.e., without a resonator, by addressing strongly interacting Rydberg states
  • a system of indirect excitons (composed of bound pairs of electrons and holes in double quantum wells with a static dipole moment). Indirect excitons, which are created by light and decay to emit light, strongly interact due to their dipole alignment.
  • a system of microcavity polaritons (exciton-polaritons inside an optical microcavity) where, similar to exciton-based optical transistors, polaritons facilitate effective interactions between photons
  • photonic crystal cavities with an active Raman gain medium
  • a cavity switch that modulates cavity properties in the time domain for quantum information applications
  • nanowire-based cavities employing polaritonic interactions for optical switching
  • silicon microrings placed in the path of an optical signal. Gate photons heat the silicon microring causing a shift in the optical resonant frequency, leading to a change in transparency at a given frequency of the optical supply.
  • a dual-mirror optical cavity that holds around 20,000 cesium atoms trapped by means of optical tweezers and laser-cooled to a few microkelvin. The cesium ensemble did not interact with light and was thus transparent. The length of a round trip between the cavity mirrors equaled an integer multiple of the wavelength of the incident light source, allowing the cavity to transmit the source light. Photons from the gate light field entered the cavity from the side, where each photon interacted with an additional "control" light field, changing a single atom's state to be resonant with the cavity optical field, which changed the field's resonance wavelength and blocked transmission of the source field, thereby "switching" the device. While the changed atom remains unidentified, quantum interference allows the gate photon to be retrieved from the cesium. A single gate photon could redirect a source field containing up to two photons before the retrieval of the gate photon was impeded, above the critical threshold for a positive gain.

 

Optical computing

From Wikipedia, the free encyclopedia

Optical or photonic computing uses photons produced by lasers or diodes for computation. For decades, photons have promised to allow a higher bandwidth than the electrons used in conventional computers (see optical fibers).

Most research projects focus on replacing current computer components with optical equivalents, resulting in an optical digital computer system processing binary data. This approach appears to offer the best short-term prospects for commercial optical computing, since optical components could be integrated into traditional computers to produce an optical-electronic hybrid. However, optoelectronic devices lose 30% of their energy converting electronic energy into photons and back; this conversion also slows the transmission of messages. All-optical computers eliminate the need for optical-electrical-optical (OEO) conversions, thus lessening the need for electrical power.

Application-specific devices, such as synthetic aperture radar (SAR) and optical correlators, have been designed to use the principles of optical computing. Correlators can be used, for example, to detect and track objects, and to classify serial time-domain optical data.

Optical components for binary digital computer

The fundamental building block of modern electronic computers is the transistor. To replace electronic components with optical ones, an equivalent optical transistor is required. This can be achieved using materials with a non-linear refractive index. In particular, materials exist in which the intensity of incoming light affects the intensity of the light transmitted through the material, in a manner analogous to the current response of a bipolar transistor. Such an optical transistor can be used to create optical logic gates, which in turn are assembled into the higher-level components of the computer's CPU. These gates would be built from nonlinear optical crystals in which one light beam is used to control another.
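
A minimal sketch of this idea, assuming a generic sigmoidal intensity-dependent transmission rather than any specific nonlinear crystal, shows how such a response could act as an optical AND gate:

  import math

  def nonlinear_transmission(intensity, threshold=1.5, steepness=10.0):
      """Transmission that turns on sharply above a threshold intensity (assumed response)."""
      return 1.0 / (1.0 + math.exp(-steepness * (intensity - threshold)))

  def optical_and(a, b, beam_intensity=1.0):
      """Combine two input beams; only their sum drives the medium transparent."""
      combined = (a + b) * beam_intensity
      output = combined * nonlinear_transmission(combined)
      return 1 if output > 0.5 else 0

  for a in (0, 1):
      for b in (0, 1):
          print(a, b, optical_and(a, b))   # only (1, 1) yields 1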

Like any computing system, an optical computing system needs three things to function well:

  1. optical processor
  2. optical data transfer, e.g. Fiber optic cable
  3. optical storage, e.g. CD/DVD/Blu-ray, etc.

Substituting electrical components with optical ones requires converting data between photons and electrons, which makes the system slower.

Controversy

There are disagreements between researchers about the future capabilities of optical computers; whether or not they may be able to compete with semiconductor-based electronic computers in terms of speed, power consumption, cost, and size is an open question. Critics note that real-world logic systems require "logic-level restoration, cascadability, fan-out and input–output isolation", all of which are currently provided by electronic transistors at low cost, low power, and high speed. For optical logic to be competitive beyond a few niche applications, major breakthroughs in non-linear optical device technology would be required, or perhaps a change in the nature of computing itself.

Misconceptions, challenges, and prospects

A significant challenge to optical computing is that computation is a nonlinear process in which multiple signals must interact. Light, which is an electromagnetic wave, can only interact with another electromagnetic wave in the presence of electrons in a material, and the strength of this interaction is much weaker for electromagnetic waves, such as light, than for the electronic signals in a conventional computer. This may result in the processing elements for an optical computer requiring more power and larger dimensions than those for a conventional electronic computer using transistors.

A further misconception is that since light can travel much faster than the drift velocity of electrons, and at frequencies measured in THz, optical transistors should be capable of extremely high frequencies. However, any electromagnetic wave must obey the transform limit, so the rate at which an optical transistor can respond to a signal is still limited by its spectral bandwidth. In practice, fiber-optic communications face limits such as dispersion that often constrain channels to bandwidths of tens of GHz, only slightly better than many silicon transistors. Obtaining dramatically faster operation than electronic transistors would therefore require practical methods of transmitting ultrashort pulses down highly dispersive waveguides.
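
The transform limit can be made concrete with a short calculation. Assuming transform-limited Gaussian pulses (time-bandwidth product of about 0.441), the available spectral bandwidth fixes the minimum pulse duration and hence the maximum switching rate:

  TBP_GAUSSIAN = 0.441   # time-bandwidth product of a transform-limited Gaussian pulse

  def min_pulse_duration(bandwidth_hz):
      """Shortest FWHM pulse duration supported by a given spectral bandwidth."""
      return TBP_GAUSSIAN / bandwidth_hz

  for bandwidth in (10e9, 100e9, 1e12):    # 10 GHz, 100 GHz, 1 THz
      print(f"{bandwidth / 1e9:6.0f} GHz -> {min_pulse_duration(bandwidth) * 1e12:6.1f} ps")
  # A 10 GHz channel supports pulses no shorter than roughly 44 ps,
  # i.e. switching rates comparable to fast electronic transistors.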

Photonic logic

Realization of a photonic controlled-NOT gate for use in quantum computing

Photonic logic is the use of photons (light) in logic gates (NOT, AND, OR, NAND, NOR, XOR, XNOR). Switching is obtained using nonlinear optical effects when two or more signals are combined.

Resonators are especially useful in photonic logic, since they allow a build-up of energy from constructive interference, thus enhancing optical nonlinear effects.
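
As a rough sketch of this build-up, the on-resonance circulating power of an idealized lossless two-mirror (Fabry-Perot) cavity can be estimated from its mirror reflectivities; the reflectivity values below are arbitrary examples, not data from the source.

  from math import sqrt, pi

  def buildup_on_resonance(r1, r2):
      """Circulating/incident power ratio of a lossless two-mirror cavity on resonance."""
      t1 = 1.0 - r1                          # power transmission of the input mirror
      return t1 / (1.0 - sqrt(r1 * r2)) ** 2

  def finesse(r1, r2):
      """Finesse of the same mirror pair."""
      rho = sqrt(r1 * r2)
      return pi * sqrt(rho) / (1.0 - rho)

  for r in (0.9, 0.99, 0.999):
      print(f"R = {r}: build-up ~ {buildup_on_resonance(r, r):7.0f}, finesse ~ {finesse(r, r):7.0f}")
  # Higher reflectivity -> higher finesse -> larger intracavity intensity,
  # and hence stronger effective nonlinearity for the same input power.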

Other approaches that have been investigated include photonic logic at a molecular level, using photoluminescent chemicals. In a demonstration, Witlicki et al. performed logical operations using molecules and SERS.

Unconventional approaches

Time delays optical computing

The basic idea is to delay light (or any other signal) in order to perform useful computations. Of particular interest is solving NP-complete problems, since these are difficult for conventional computers.

Two basic properties of light are actually used in this approach:

  • The light can be delayed by passing it through an optical fiber of a certain length.
  • The light can be split into multiple (sub)rays. This property is essential because it allows multiple candidate solutions to be evaluated at the same time.

When solving a problem with time delays, the following steps must be followed:

  • The first step is to create a graph-like structure made from optical cables and splitters. Each graph has a start node and a destination node.
  • The light enters through the start node and traverses the graph until it reaches the destination. It is delayed when passing through arcs and divided inside nodes.
  • The light is marked when passing through an arc or through a node so that we can easily identify that fact at the destination node.
  • At the destination node we wait for a signal (a fluctuation in the intensity of the signal) that arrives at one or more particular moments in time. If no signal arrives at the expected moment, the problem has no solution; otherwise it does. Fluctuations can be read with a photodetector and an oscilloscope.

The first problem attacked in this way was the Hamiltonian path problem.

The simplest such problem is the subset sum problem. An optical device solving an instance with four numbers {a1, a2, a3, a4} is depicted below:

Optical device for solving the Subset sum problem

The light enters at the start node and is divided into two (sub)rays of smaller intensity. These two rays arrive at the second node at moments 0 and a1. Each of them is divided into two sub-rays, which arrive at the third node at moments 0, a1, a2 and a1 + a2. These represent all the subsets of the set {a1, a2}. We expect fluctuations in the intensity of the signal at no more than four different moments. At the destination node we expect fluctuations at no more than 16 different moments (corresponding to all subsets of the four given numbers). If there is a fluctuation at the target moment B, the problem has a solution; otherwise no subset sums to B. For a practical implementation we cannot have zero-length cables, so every cable is lengthened by a small value k (the same for all). In this case the solution is expected at moment B + n*k.
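
The behaviour of this device can be checked with a short simulation. The sketch below enumerates the arrival times of all sub-rays; the instance {3, 5, 8, 9}, the target B = 14 and the cable increment k = 1 are assumptions chosen for illustration.

  from itertools import product

  def arrival_times(numbers, k=1):
      """All moments at which a sub-ray can arrive at the destination node."""
      times = set()
      for choice in product((0, 1), repeat=len(numbers)):    # one choice per number = one subset
          times.add(sum(c * a for c, a in zip(choice, numbers)) + k * len(numbers))
      return times

  numbers = [3, 5, 8, 9]   # an example instance {a1, a2, a3, a4}
  B = 14                   # target sum (5 + 9 = 14)
  k = 1                    # small fixed extra length added to every cable
  print((B + k * len(numbers)) in arrival_times(numbers, k))   # True: a pulse arrives at moment B + n*k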

Wavelength-based computing

Wavelength-based computing can be used to solve the 3-SAT problem with n variables, m clauses, and no more than three variables per clause. Each wavelength contained in a light ray is regarded as a possible assignment of values to the n variables. The optical device contains prisms and mirrors that are used to discriminate the wavelengths satisfying the formula.
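
A digital stand-in for this filtering process might look as follows; each "wavelength" is identified with one of the 2^n assignments, each prism/mirror stage is modelled as a filter, and the two-clause formula is an assumed example.

  from itertools import product

  # Example formula (an assumption for illustration):
  # (x1 or not x2 or x3) and (not x1 or x2 or x3); literals are signed indices.
  clauses = [[1, -2, 3], [-1, 2, 3]]
  n = 3

  def satisfies(assignment, clause):
      """True if the 0/1 assignment satisfies the clause."""
      return any(assignment[abs(lit) - 1] == (1 if lit > 0 else 0) for lit in clause)

  wavelengths = set(product((0, 1), repeat=n))   # one "wavelength" per assignment
  for clause in clauses:                         # each prism/mirror stage filters one clause
      wavelengths = {a for a in wavelengths if satisfies(a, clause)}

  print(bool(wavelengths))   # True if some wavelength survives, i.e. the formula is satisfiable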

Computing by xeroxing on transparencies

This approach uses a Xerox machine and transparent sheets to perform computations. The k-SAT problem with n variables, m clauses, and at most k variables per clause has been solved in three steps (a sketch in code follows the list):

  • First, all 2^n possible assignments of the n variables are generated by performing n xerox copies.
  • Using at most 2k copies of the truth table, each clause is evaluated at every row of the truth table simultaneously.
  • The solution is obtained by making a single copy operation of the overlapped transparencies of all m clauses.
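
In ordinary code, the same procedure can be sketched by treating each clause as a transparency that is clear exactly on the truth-table rows it satisfies; the small example instance below is assumed for illustration.

  from itertools import product

  clauses = [[1, -2], [-1, 2], [2, 3]]   # an assumed small instance; literals are signed indices
  n = 3

  def clause_transparency(clause, n):
      """One transparency: clear (True) on the truth-table rows where the clause holds."""
      return [any(row[abs(lit) - 1] == (1 if lit > 0 else 0) for lit in clause)
              for row in product((0, 1), repeat=n)]

  # Overlaying the m transparencies in the final copy: a row stays clear
  # only if it is clear on every one of them.
  stack = [all(cells) for cells in zip(*(clause_transparency(c, n) for c in clauses))]
  print(any(stack))   # True -> some row stayed clear, so the formula is satisfiable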

Masking optical beams

The travelling salesman problem has been solved using an optical approach. All possible TSP paths were generated and stored in a binary matrix, which was multiplied by a gray-scale vector containing the distances between cities. The multiplication was performed optically using an optical correlator.
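
A numerical analogue of this masking scheme, with the optical correlator replaced by an ordinary matrix product and an assumed four-city instance, is sketched below.

  from itertools import permutations

  # Symmetric distance matrix for four cities (arbitrary example values).
  D = [[0, 2, 9, 10],
       [2, 0, 6, 4],
       [9, 6, 0, 3],
       [10, 4, 3, 0]]
  n = 4

  edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
  dist_vec = [D[i][j] for i, j in edges]          # the "gray-scale vector" of distances

  def tour_mask(tour):
      """Binary row of the path matrix: 1 where the tour uses a city pair."""
      used = {tuple(sorted((tour[k], tour[(k + 1) % n]))) for k in range(n)}
      return [1 if e in used else 0 for e in edges]

  tours = [(0,) + p for p in permutations(range(1, n))]
  lengths = [sum(m * d for m, d in zip(tour_mask(t), dist_vec)) for t in tours]
  print(min(zip(lengths, tours)))   # shortest tour length and the corresponding tour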

Optical Fourier co-processors

Many computations, particularly in scientific applications, require frequent use of the 2D discrete Fourier transform (DFT) – for example in solving differential equations describing propagation of waves or transfer of heat. Though modern GPU technologies typically enable high-speed computation of large 2D DFTs, techniques have been developed that can perform continuous Fourier transform optically by utilising the natural Fourier transforming property of lenses. The input is encoded using a liquid crystal spatial light modulator and the result is measured using a conventional CMOS or CCD image sensor. Such optical architectures can offer superior scaling of computational complexity due to the inherently highly interconnected nature of optical propagation, and have been used to solve 2D heat equations.
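
The lens's Fourier-transforming property can be previewed numerically. In the sketch below (a stand-in, not an optical simulation), the SLM-encoded input is a small square aperture and the camera readout is modelled as the squared magnitude of a discrete 2D FFT.

  import numpy as np

  # Assumed input: a small square aperture encoded on a 256x256 "SLM".
  field_in = np.zeros((256, 256))
  field_in[120:136, 120:136] = 1.0

  # A single lens performs a continuous Fourier transform; fft2 is its discrete analogue.
  field_out = np.fft.fftshift(np.fft.fft2(field_in))

  # The camera records only intensity, i.e. the squared magnitude of the field.
  intensity = np.abs(field_out) ** 2
  print(intensity.shape, intensity.max())   # a sinc^2-like diffraction pattern, peaked at the centre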

Ising machines

Physical computers whose design was inspired by the theoretical Ising model are called Ising machines.

Yoshihisa Yamamoto's lab at Stanford pioneered building Ising machines using photons. Initially Yamamoto and his colleagues built an Ising machine using lasers, mirrors, and other optical components commonly found on an optical table.

Later a team at Hewlett Packard Labs developed photonic chip design tools and used them to build an Ising machine on a single chip, integrating 1,052 optical components on that single chip.

Action at a distance

From Wikipedia, the free encyclopedia

In physics, action at a distance is the concept that an object can be moved, changed, or otherwise affected without being physically touched (as in mechanical contact) by another object. That is, it is the non-local interaction of objects that are separated in space.

This term was used most often in the context of early theories of gravity and electromagnetism to describe how an object responds to the influence of distant objects. For example, Coulomb's law and Newton's law of universal gravitation are such early theories.

More generally "action at a distance" describes the failure of early atomistic and mechanistic theories which sought to reduce all physical interaction to collision. The exploration and resolution of this problematic phenomenon led to significant developments in physics, from the concept of a field, to descriptions of quantum entanglement and the mediator particles of the Standard Model.

Electricity and magnetism

Philosopher William of Ockham discussed action at a distance to explain magnetism and the ability of the Sun to heat the Earth's atmosphere without affecting the intervening space.

Efforts to account for action at a distance in the theory of electromagnetism led to the development of the concept of a field which mediated interactions between currents and charges across empty space. According to field theory, we account for the Coulomb (electrostatic) interaction between charged particles through the fact that charges produce around themselves an electric field, which can be felt by other charges as a force. Maxwell directly addressed the subject of action-at-a-distance in chapter 23 of his A Treatise on Electricity and Magnetism in 1873. He began by reviewing the explanation of Ampère's formula given by Gauss and Weber. On page 437 he indicates the physicists' disgust with action at a distance. In 1845 Gauss wrote to Weber desiring "action, not instantaneous, but propagated in time in a similar manner to that of light". This aspiration was developed by Maxwell with the theory of an electromagnetic field described by Maxwell's equations, which used the field to elegantly account for all electromagnetic interactions, as well as light (which, until then, had been seen as a completely unrelated phenomenon). In Maxwell's theory, the field is its own physical entity, carrying momenta and energy across space, and action-at-a-distance is only the apparent effect of local interactions of charges with their surrounding field.
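
A minimal numerical reading of this field picture, using Coulomb's law with illustrative charge values, is the following: one charge sets up a field E(r) throughout space, and a second charge responds only to the field at its own location, F = qE.

  K = 8.9875e9   # Coulomb constant in N*m^2/C^2

  def field_magnitude(q_source, r):
      """Magnitude of the electric field of a point charge at distance r (metres)."""
      return K * q_source / r ** 2

  def force_on(q_test, q_source, r):
      """Force on a test charge from the local field value, F = qE."""
      return q_test * field_magnitude(q_source, r)

  print(force_on(1e-6, 1e-6, 0.1))   # ~0.9 N between two 1-microcoulomb charges 10 cm apart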

Electrodynamics was later described without fields (in Minkowski space) as the direct interaction of particles with lightlike separation vectors. This resulted in the Fokker-Tetrode-Schwarzschild action integral. This kind of electrodynamic theory is often called "direct interaction" to distinguish it from field theories where action at a distance is mediated by a localized field (localized in the sense that its dynamics are determined by the nearby field parameters). This description of electrodynamics, in contrast with Maxwell's theory, explains apparent action at a distance not by postulating a mediating entity (a field) but by appealing to the natural geometry of special relativity.

Direct interaction electrodynamics is explicitly symmetrical in time and avoids the infinite energy predicted in the field immediately surrounding point particles. Feynman and Wheeler have shown that it can account for radiation and radiative damping (which had been considered strong evidence for the independent existence of the field). However, various proofs, beginning with that of Dirac, have shown that direct interaction theories (under reasonable assumptions) do not admit Lagrangian or Hamiltonian formulations (these are the so-called No Interaction Theorems). Also significant is the measurement and theoretical description of the Lamb shift which strongly suggests that charged particles interact with their own field. Fields, because of these and other difficulties, have been elevated to the fundamental operators in Quantum Field Theory and Modern physics has thus largely abandoned direct interaction theory.

Gravity

Newton

Newton's classical theory of gravity offered no prospect of identifying any mediator of gravitational interaction. His theory assumed that gravitation acts instantaneously, regardless of distance. Kepler's observations gave strong evidence that in planetary motion angular momentum is conserved. (The mathematical proof is valid only in the case of a Euclidean geometry.) In Newton's theory, gravity is an attractive force between two objects that arises from their masses.

From a Newtonian perspective, action at a distance can be regarded as "a phenomenon in which a change in intrinsic properties of one system induces a change in the intrinsic properties of a distant system, independently of the influence of any other systems on the distant system, and without there being a process that carries this influence contiguously in space and time" (Berkovitz 2008).

A related question, raised by Ernst Mach, was how rotating bodies know how much to bulge at the equator. This, it seems, requires an action-at-a-distance from distant matter, informing the rotating object about the state of the universe. Einstein coined the term Mach's principle for this question.

It is inconceivable that inanimate Matter should, without the Mediation of something else, which is not material, operate upon, and affect other matter without mutual Contact…That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro' a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it. Gravity must be caused by an Agent acting constantly according to certain laws; but whether this Agent be material or immaterial, I have left to the Consideration of my readers.

— Isaac Newton, Letters to Bentley, 1692/3

Different authors have attempted to clarify the aspects of remote action and God’s involvement on the basis of textual investigations, mainly from the Mathematical Principles of Natural Philosophy, Newton’s correspondence with Richard Bentley (1692/93), and Queries that Newton introduced at the end of the Opticks book in the first three editions (between 1704 and 1721).

Andrew Janiak, in Newton as Philosopher, considered that Newton denied that gravity could be essential to matter, dismissed direct action at a distance, and also rejected the idea of a material substance. But Newton accepted, in Janiak's view, an immaterial ether, which Janiak considers Newton to identify with God himself: “Newton obviously thinks that God might be the very “immaterial medium” underlying all gravitational interactions among material bodies.”

Steffen Ducheyne, in Newton on Action at a Distance, considered that Newton never accepted direct remote action, only material intervention or immaterial substance.

Hylarie Kochiras, in Gravity and Newton’s substance counting problem, argued that Newton was inclined to reject direct action, giving priority to the hypothesis of an intangible medium. In his speculative moments, however, Newton oscillated between accepting and rejecting direct remote action. According to Kochiras, Newton holds that God is virtually omnipresent, that force or agency must subsist in a substance, and that God is substantially omnipresent, which yields a hidden premise, the principle of local action.

Eric Schliesser, in Newton’s substance monism, distant action, and the nature of Newton’s Empiricism, argued that Newton does not categorically refuse the idea that matter is active, and therefore accepted the possibility of a direct action at a distance. Newton affirms the virtual omnipresence of God in addition to his substantial omnipresence.

John Henry, in Gravity and De gravitatione: The Development of Newton’s Ideas on Action at a Distance, also argued that direct remote action was not inconceivable for Newton, rejecting the idea that gravity can be explained by subtle matter, accepting the idea of an omnipotent God, and rejecting the Epicurean attraction.

For further discussion see Ducheyne, S. "Newton on Action at a Distance". Journal of the History of Philosophy vol. 52.4 (2014): 675–702.

Einstein

According to Albert Einstein's theory of special relativity, instantaneous action at a distance violates the relativistic upper limit on speed of propagation of information. If one of the interacting objects were to suddenly be displaced from its position, the other object would feel its influence instantaneously, meaning information had been transmitted faster than the speed of light.

One of the conditions that a relativistic theory of gravitation must meet is that gravity is mediated with a speed that does not exceed c, the speed of light in a vacuum. From the previous success of electrodynamics, it was foreseeable that the relativistic theory of gravitation would have to use the concept of a field, or something similar.

This has been achieved by Einstein's theory of general relativity, in which gravitational interaction is mediated by deformation of space-time geometry. Matter warps the geometry of space-time, and these effects are—as with electric and magnetic fields—propagated at the speed of light. Thus, in the presence of matter, space-time becomes non-Euclidean, resolving the apparent conflict between Newton's proof of the conservation of angular momentum and Einstein's theory of special relativity.

Mach's question regarding the bulging of rotating bodies is resolved because local space-time geometry is informing a rotating body about the rest of the universe. In Newton's theory of motion, space acts on objects, but is not acted upon. In Einstein's theory of motion, matter acts upon space-time geometry, deforming it; and space-time geometry acts upon matter, by affecting the behavior of geodesics.

As a consequence, and unlike the classical theory, general relativity predicts that accelerating masses emit gravitational waves, i.e. disturbances in the curvature of spacetime that propagate outward at lightspeed. Their existence (like many other aspects of relativity) has been experimentally confirmed by astronomers—most dramatically in the direct detection of gravitational waves originating from a black hole merger when they passed through LIGO in 2015.

Quantum mechanics

Since the early twentieth century, quantum mechanics has posed new challenges for the view that physical processes should obey locality. Whether quantum entanglement counts as action-at-a-distance hinges on the nature of the wave function and decoherence, issues over which there is still considerable debate among scientists and philosophers.

One important line of debate originated with Einstein, who challenged the idea that quantum mechanics offers a complete description of reality, along with Boris Podolsky and Nathan Rosen. They proposed a thought experiment involving an entangled pair of observables with non-commuting operators (e.g. position and momentum).

This thought experiment, which came to be known as the EPR paradox, hinges on the principle of locality. A common presentation of the paradox is as follows: two particles interact and fly off in opposite directions. Even when the particles are so far apart that any classical interaction would be impossible (see principle of locality), a measurement of one particle nonetheless determines the corresponding result of a measurement of the other.

After the EPR paper, several scientists such as de Broglie studied local hidden variables theories. In the 1960s John Bell derived an inequality that indicated a testable difference between the predictions of quantum mechanics and local hidden variables theories. To date, all experiments testing Bell-type inequalities in situations analogous to the EPR thought experiment have results consistent with the predictions of quantum mechanics, suggesting that local hidden variables theories can be ruled out. Whether or not this is interpreted as evidence for nonlocality depends on one's interpretation of quantum mechanics.
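
The testable difference Bell identified can be illustrated with the CHSH form of the inequality. The sketch below uses the quantum prediction E(a, b) = -cos(a - b) for a singlet state and the standard measurement angles; local hidden-variable theories must satisfy |S| <= 2.

  from math import cos, pi, sqrt

  def E(a, b):
      """Quantum correlation for spin measurements along angles a and b (singlet state)."""
      return -cos(a - b)

  # Standard angle choices for the CHSH form of Bell's inequality.
  a, a_prime = 0.0, pi / 2
  b, b_prime = pi / 4, 3 * pi / 4

  S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
  print(abs(S), 2 * sqrt(2))   # |S| = 2*sqrt(2) ~ 2.83, exceeding the local-hidden-variable bound of 2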

Non-standard interpretations of quantum mechanics vary in their response to the EPR-type experiments. The Bohm interpretation gives an explanation based on nonlocal hidden variables for the correlations seen in entanglement. Many advocates of the many-worlds interpretation argue that it can explain these correlations in a way that does not require a violation of locality, by allowing measurements to have non-unique outcomes.

If "action" is defined as a force, physical work or information, then it should be stated clearly that entanglement cannot communicate action between two entangled particles (Einstein's worry about "spooky action at a distance" does not actually violate special relativity). What happens in entanglement is that a measurement on one entangled particle yields a random result, then a later measurement on another particle in the same entangled (shared) quantum state must always yield a value correlated with the first measurement. Since no force, work, or information is communicated (the first measurement is random), the speed of light limit does not apply (see Quantum entanglement and Bell test experiments). In the standard Copenhagen interpretation, as discussed above, entanglement demonstrates a genuine nonlocal effect of quantum mechanics, but does not communicate information, either quantum or classical.

Representation of a Lie group

From Wikipedia, the free encyclopedia