
Friday, April 3, 2026

Measurement in quantum mechanics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Measurement_in_quantum_mechanics

In quantum physics, a measurement is the testing or manipulation of a physical system to yield a numerical result. A fundamental feature of quantum theory is that the predictions it makes are probabilistic.

The procedure for finding a probability involves combining a quantum state, which mathematically describes a quantum system, with a mathematical representation of the measurement to be performed on that system. The formula for this calculation is known as the Born rule. For example, a quantum particle like an electron can be described by a quantum state that associates to each point in space a complex number called a probability amplitude. Applying the Born rule to these amplitudes gives the probabilities that the electron will be found in one region or another when an experiment is performed to locate it. This is the best the theory can do; it cannot say for certain where the electron will be found. The same quantum state can also be used to make a prediction of how the electron will be moving, if an experiment is performed to measure its momentum instead of its position. The uncertainty principle implies that, whatever the quantum state, the range of predictions for the electron's position and the range of predictions for its momentum cannot both be narrow. Some quantum states imply a near-certain prediction of the result of a position measurement, but the result of a momentum measurement will be highly unpredictable, and vice versa. Furthermore, the fact that nature violates the statistical conditions known as Bell inequalities indicates that the unpredictability of quantum measurement results cannot be explained away as due to ignorance about local hidden variables within quantum systems.

Measuring a quantum system generally changes the quantum state that describes that system. This is a central feature of quantum mechanics, one that is both mathematically intricate and conceptually subtle. The mathematical tools for making predictions about what measurement outcomes may occur, and how quantum states can change, were developed during the 20th century and make use of linear algebra and functional analysis. Quantum physics has proven to be an empirical success and to have wide-ranging applicability.

On a more philosophical level, debates continue about the meaning of the measurement concept. The different interpretations of quantum mechanics are largely concerned with resolving what is known as the measurement problem.

Mathematical formalism

"Observables" as self-adjoint operators

In quantum mechanics, each physical system is associated with a Hilbert space, each element of which represents a possible state of the physical system. The approach codified by John von Neumann represents a measurement upon a physical system by a self-adjoint operator on that Hilbert space termed an "observable". These observables play the role of measurable quantities familiar from classical physics: position, momentum, energy, angular momentum and so on. The dimension of the Hilbert space may be infinite, as it is for the space of square-integrable functions on a line, which is used to define the quantum physics of a continuous degree of freedom. Alternatively, the Hilbert space may be finite-dimensional, as occurs for spin degrees of freedom. Many treatments of the theory focus on the finite-dimensional case, as the mathematics involved is somewhat less demanding. Indeed, introductory physics texts on quantum mechanics often gloss over mathematical technicalities that arise for continuous-valued observables and infinite-dimensional Hilbert spaces, such as the distinction between bounded and unbounded operators; questions of convergence (whether the limit of a sequence of Hilbert-space elements also belongs to the Hilbert space); exotic possibilities for sets of eigenvalues, like Cantor sets; and so forth. These issues can be satisfactorily resolved using spectral theory; the present article will avoid them whenever possible.

Projective measurement

The eigenvectors of a von Neumann observable form an orthonormal basis for the Hilbert space, and each possible outcome of that measurement corresponds to one of the vectors comprising the basis. A density operator is a positive-semidefinite operator on the Hilbert space whose trace is equal to 1. For each measurement that can be defined, the probability distribution over the outcomes of that measurement can be computed from the density operator. The procedure for doing so is the Born rule, which states that

P(x_i) = tr(Π_i ρ),

where ρ is the density operator, and Π_i is the projection operator onto the basis vector corresponding to the measurement outcome x_i. The average of the eigenvalues of a von Neumann observable, weighted by the Born rule probabilities, is the expectation value of that observable. For an observable A, the expectation value given a quantum state ρ is

⟨A⟩ = tr(A ρ).

A density operator that is a rank-1 projection is known as a pure quantum state, and all quantum states that are not pure are designated mixed. Pure states are also known as wavefunctions. Assigning a pure state to a quantum system implies certainty about the outcome of some measurement on that system (i.e., P(x) = 1 for some outcome x). Any mixed state can be written as a convex combination of pure states, though not in a unique way. The state space of a quantum system is the set of all states, pure and mixed, that can be assigned to it.

The Born rule associates a probability with each unit vector in the Hilbert space, in such a way that these probabilities sum to 1 for any set of unit vectors comprising an orthonormal basis. Moreover, the probability associated with a unit vector is a function of the density operator and the unit vector, and not of additional information like a choice of basis for that vector to be embedded in. Gleason's theorem establishes the converse: all assignments of probabilities to unit vectors (or, equivalently, to the operators that project onto them) that satisfy these conditions take the form of applying the Born rule to some density operator.
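As a concrete illustration of the Born rule above, the following short Python sketch (using NumPy; the density operator and the measurement basis are arbitrary choices made for the example, not anything prescribed by the text) computes projective-measurement probabilities tr(Π_i ρ) for a qubit.

```python
import numpy as np

# Minimal sketch: Born-rule probabilities for a projective measurement on a
# qubit. The state rho and the basis below are illustrative choices.

rho = np.array([[0.75, 0.25],
                [0.25, 0.25]], dtype=complex)   # a valid density operator (trace 1, PSD)

# Orthonormal measurement basis (here, the computational basis).
basis = [np.array([1, 0], dtype=complex),
         np.array([0, 1], dtype=complex)]

probs = []
for v in basis:
    proj = np.outer(v, v.conj())                  # projector |v><v|
    probs.append(np.real(np.trace(proj @ rho)))   # Born rule: P = tr(Π ρ)

print(probs, sum(probs))   # the probabilities are non-negative and sum to 1
```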

Generalized measurement (POVM)

In functional analysis and quantum measurement theory, a positive-operator-valued measure (POVM) is a measure whose values are positive semi-definite operators on a Hilbert space. POVMs are a generalisation of projection-valued measures (PVMs) and, correspondingly, quantum measurements described by POVMs are a generalisation of quantum measurement described by PVMs. In rough analogy, a POVM is to a PVM what a mixed state is to a pure state. Mixed states are needed to specify the state of a subsystem of a larger system (see Schrödinger–HJW theorem); analogously, POVMs are necessary to describe the effect on a subsystem of a projective measurement performed on a larger system. POVMs are the most general kind of measurement in quantum mechanics, and can also be used in quantum field theory. They are extensively used in the field of quantum information.

In the simplest case, of a POVM with a finite number of elements acting on a finite-dimensional Hilbert space, a POVM is a set of positive semi-definite matrices {F_i} on a Hilbert space H that sum to the identity matrix,

∑_i F_i = I.

In quantum mechanics, the POVM element F_i is associated with the measurement outcome i, such that the probability of obtaining it when making a measurement on the quantum state ρ is given by

P(i) = tr(ρ F_i),

where tr is the trace operator. When the quantum state being measured is a pure state |ψ⟩ this formula reduces to

P(i) = tr(|ψ⟩⟨ψ| F_i) = ⟨ψ|F_i|ψ⟩.
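To make the definition concrete, here is a hedged Python sketch (NumPy assumed; the three-outcome symmetric "trine" POVM and the input state are purely illustrative choices) that checks the completeness condition and evaluates the outcome probabilities P(i) = tr(ρ F_i).

```python
import numpy as np

# Minimal sketch: a three-outcome POVM on a qubit (a symmetric "trine"
# measurement, chosen only as an example), verifying that the elements sum
# to the identity and computing Born-rule probabilities tr(ρ F_i).

def bloch_state(theta):
    # |ψ> = cos(θ/2)|0> + sin(θ/2)|1>
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
povm = [(2 / 3) * np.outer(bloch_state(a), bloch_state(a).conj()) for a in angles]

# Completeness: the POVM elements must sum to the identity.
assert np.allclose(sum(povm), np.eye(2))

rho = np.outer(bloch_state(0.3), bloch_state(0.3).conj())   # an arbitrary pure state
probs = [np.real(np.trace(rho @ F)) for F in povm]
print(probs, sum(probs))   # non-negative probabilities summing to 1
```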

State change due to measurement

A measurement upon a quantum system will generally bring about a change of the quantum state of that system. Writing a POVM does not provide the complete information necessary to describe this state-change process. To remedy this, further information is specified by decomposing each POVM element into a product:

F_i = A_i† A_i.

The Kraus operators A_i, named for Karl Kraus, provide a specification of the state-change process. They are not necessarily self-adjoint, but the products A_i† A_i are. If upon performing the measurement the outcome i is obtained, then the initial state ρ is updated to

ρ → A_i ρ A_i† / tr(ρ F_i).

An important special case is the Lüders rule, named for Gerhart Lüders. If the POVM is itself a PVM, then the Kraus operators can be taken to be the projectors Π_i onto the eigenspaces of the von Neumann observable:

ρ → Π_i ρ Π_i / tr(ρ Π_i).

If the initial state ρ is pure, and the projectors Π_i have rank 1, they can be written as projectors onto the vectors |ψ⟩ and |i⟩, respectively. The formula simplifies to

ρ = |ψ⟩⟨ψ| → |i⟩⟨i|.

The Lüders rule has historically been known as the "reduction of the wave packet" or the "collapse of the wavefunction". The pure state |i⟩ implies a probability-one prediction for any von Neumann observable that has |i⟩ as an eigenvector. Introductory texts on quantum theory often express this by saying that if a quantum measurement is repeated in quick succession, the same outcome will occur both times. This is an oversimplification, since the physical implementation of a quantum measurement may involve a process like the absorption of a photon; after the measurement, the photon does not exist to be measured again.

We can define a linear, trace-preserving, completely positive map by summing over all the possible post-measurement states of a POVM without the normalisation:

ρ → ∑_i A_i ρ A_i†.

It is an example of a quantum channel, and can be interpreted as expressing how a quantum state changes if a measurement is performed but the result of that measurement is lost.
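The following Python sketch illustrates both the selective update (a particular outcome is kept) and the non-selective channel (the result is lost). The Kraus operators used here are simply the computational-basis projectors and the input state is an arbitrary example; NumPy is assumed.

```python
import numpy as np

# Minimal sketch: state update under a measurement described by Kraus
# operators A_i with F_i = A_i† A_i. The example uses the projectors of a
# computational-basis measurement (the Lüders rule), purely as an illustration.

kraus = [np.diag([1.0, 0.0]).astype(complex),   # A_0 = |0><0|
         np.diag([0.0, 1.0]).astype(complex)]   # A_1 = |1><1|

rho = np.array([[0.5, 0.5],
                [0.5, 0.5]], dtype=complex)      # the pure state |+><+|

def update(rho, A):
    """Selective update when the outcome associated with A is observed."""
    p = np.real(np.trace(A.conj().T @ A @ rho))  # outcome probability tr(ρ A†A)
    return p, (A @ rho @ A.conj().T) / p

for i, A in enumerate(kraus):
    p, post = update(rho, A)
    print(f"outcome {i}: p = {p:.2f}\n{post.real}")

# Non-selective evolution (result lost): the quantum channel ρ -> Σ_i A_i ρ A_i†.
rho_out = sum(A @ rho @ A.conj().T for A in kraus)
print(rho_out.real)   # off-diagonal coherences are destroyed
```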

Examples

Bloch sphere representation of states (in blue) and optimal POVM (in red) for unambiguous quantum state discrimination on the states |ψ⟩ and |φ⟩. Note that on the Bloch sphere orthogonal states are antiparallel.

The prototypical example of a finite-dimensional Hilbert space is a qubit, a quantum system whose Hilbert space is 2-dimensional. A pure state for a qubit can be written as a linear combination of two orthogonal basis states |0⟩ and |1⟩ with complex coefficients:

|ψ⟩ = α|0⟩ + β|1⟩.

A measurement in the (|0⟩, |1⟩) basis will yield outcome |0⟩ with probability |α|² and outcome |1⟩ with probability |β|², so by normalization,

|α|² + |β|² = 1.

An arbitrary state for a qubit can be written as a linear combination of the Pauli matrices, which, together with the identity, provide a basis for 2 × 2 self-adjoint matrices:

ρ = ½ (I + r_x σ_x + r_y σ_y + r_z σ_z),

where the real numbers (r_x, r_y, r_z) are the coordinates of a point within the unit ball and

σ_x = ( 0 1 ; 1 0 ),  σ_y = ( 0 −i ; i 0 ),  σ_z = ( 1 0 ; 0 −1 ).

POVM elements can be represented likewise, though the trace of a POVM element is not fixed to equal 1. The Pauli matrices are traceless and orthogonal to one another with respect to the Hilbert–Schmidt inner product, and so the coordinates (r_x, r_y, r_z) of the state ρ are the expectation values of the three von Neumann measurements defined by the Pauli matrices. If such a measurement is applied to a qubit, then by the Lüders rule, the state will update to the eigenvector of that Pauli matrix corresponding to the measurement outcome. The eigenvectors of σ_z are the basis states |0⟩ and |1⟩, and a measurement of σ_z is often called a measurement in the "computational basis." After a measurement in the computational basis, the outcome of a σ_x or σ_y measurement is maximally uncertain.
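A short Python sketch (NumPy assumed; the Bloch vector below is an arbitrary example) shows how the coordinates of a qubit state are recovered as Pauli expectation values tr(ρ σ_k).

```python
import numpy as np

# Minimal sketch: Bloch-vector coordinates of a qubit state as expectation
# values of the Pauli matrices. The vector r is an illustrative choice.

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

r = np.array([0.3, -0.4, 0.5])                        # a point inside the unit ball
rho = 0.5 * (I2 + r[0] * sx + r[1] * sy + r[2] * sz)  # ρ = ½(I + r·σ)

# Recover the coordinates as expectation values of the Pauli measurements.
coords = [np.real(np.trace(rho @ s)) for s in (sx, sy, sz)]
print(coords)                 # ≈ [0.3, -0.4, 0.5]
print(np.real(np.trace(rho))) # 1, as required of a density operator
```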

A pair of qubits together form a system whose Hilbert space is 4-dimensional. One significant von Neumann measurement on this system is that defined by the Bell basis, a set of four maximally entangled states:

|Φ⁺⟩ = (|00⟩ + |11⟩)/√2,  |Φ⁻⟩ = (|00⟩ − |11⟩)/√2,  |Ψ⁺⟩ = (|01⟩ + |10⟩)/√2,  |Ψ⁻⟩ = (|01⟩ − |10⟩)/√2.

Probability density for the outcome of a position measurement given the energy eigenstate of a 1D harmonic oscillator

A common and useful example of quantum mechanics applied to a continuous degree of freedom is the quantum harmonic oscillator. This system is defined by the Hamiltonian

H = p²/(2m) + ½ m ω² x²,

where H, the momentum operator p and the position operator x are self-adjoint operators on the Hilbert space of square-integrable functions on the real line. The energy eigenstates solve the time-independent Schrödinger equation:

H |ψ_n⟩ = E_n |ψ_n⟩.

These eigenvalues can be shown to be given by

E_n = ℏω (n + ½),  n = 0, 1, 2, …,

and these values give the possible numerical outcomes of an energy measurement upon the oscillator. The set of possible outcomes of a position measurement on a harmonic oscillator is continuous, and so predictions are stated in terms of a probability density function p(x) that gives the probability of the measurement outcome lying in the infinitesimal interval from x to x + dx.
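As a hedged numerical check of the spectrum quoted above, the following Python sketch (NumPy assumed; the grid size and box length are arbitrary discretization choices, and units are set to ℏ = m = ω = 1) diagonalizes a finite-difference approximation of the oscillator Hamiltonian and compares the lowest eigenvalues with E_n = ℏω(n + ½).

```python
import numpy as np

# Minimal sketch: lowest eigenvalues of the 1D harmonic oscillator from a
# finite-difference Hamiltonian, compared with the analytic E_n = n + 1/2
# (in units with hbar = m = omega = 1). Grid parameters are illustrative.

n, L = 1500, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Kinetic term -1/2 d²/dx² as a tridiagonal stencil; potential 1/2 x².
main = 1.0 / dx**2 + 0.5 * x**2
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

evals = np.linalg.eigvalsh(H)
print(evals[:5])   # ≈ [0.5, 1.5, 2.5, 3.5, 4.5]
```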

History of the measurement concept

The "old quantum theory"

The old quantum theory is a collection of results from the years 1900–1925 which predate modern quantum mechanics. The theory was never complete or self-consistent, but was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include Max Planck's calculation of the blackbody radiation spectrum, Albert Einstein's explanation of the photoelectric effect, Einstein and Peter Debye's work on the specific heat of solids, Niels Bohr and Hendrika van Leeuwen's proof that classical physics cannot account for magnetism, Bohr's model of the hydrogen atom and Arnold Sommerfeld's extension of the Bohr model to include relativistic effects.

Stern–Gerlach experiment: Silver atoms travelling through an inhomogeneous magnetic field, and being deflected up or down depending on their spin; (1) furnace, (2) beam of silver atoms, (3) inhomogeneous magnetic field, (4) classically expected result, (5) observed result.

The Stern–Gerlach experiment, proposed in 1921 and implemented in 1922, became a prototypical example of a quantum measurement having a discrete set of possible outcomes. In the original experiment, silver atoms were sent through a spatially varying magnetic field, which deflected them before they struck a detector screen, such as a glass slide. Particles with non-zero magnetic moment are deflected, due to the magnetic field gradient, from a straight path. The screen reveals discrete points of accumulation, rather than a continuous distribution, owing to the particles' quantized spin.

Transition to the "new" quantum theory

A 1925 paper by Werner Heisenberg, known in English as "Quantum theoretical re-interpretation of kinematic and mechanical relations", marked a pivotal moment in the maturation of quantum physics. Heisenberg sought to develop a theory of atomic phenomena that relied only on "observable" quantities. At the time, and in contrast with the later standard presentation of quantum mechanics, Heisenberg did not regard the position of an electron bound within an atom as "observable". Instead, his principal quantities of interest were the frequencies of light emitted or absorbed by atoms.

The uncertainty principle dates to this period. It is frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment where one attempts to measure an electron's position and momentum simultaneously. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position-momentum uncertainty principle is due to Earle Hesse Kennard, Wolfgang Pauli, and Hermann Weyl, and its generalization to arbitrary pairs of noncommuting observables is due to Howard P. Robertson and Erwin Schrödinger.

Writing X and P for the self-adjoint operators representing position and momentum respectively, a standard deviation of position can be defined as

σ_X = √(⟨X²⟩ − ⟨X⟩²),

and likewise for the momentum:

σ_P = √(⟨P²⟩ − ⟨P⟩²).

The Kennard–Pauli–Weyl uncertainty relation is

σ_X σ_P ≥ ℏ/2.

This inequality means that no preparation of a quantum particle can imply simultaneously precise predictions for a measurement of position and for a measurement of momentum. The Robertson inequality generalizes this to the case of an arbitrary pair of self-adjoint operators A and B. The commutator of these two operators is

[A, B] = AB − BA,

and this provides the lower bound on the product of standard deviations:

σ_A σ_B ≥ ½ |⟨[A, B]⟩|.

Substituting in the canonical commutation relation [X, P] = iℏ, an expression first postulated by Max Born in 1925,[37] recovers the Kennard–Pauli–Weyl statement of the uncertainty principle.
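The Robertson inequality is easy to verify numerically. The Python sketch below (NumPy assumed; the qubit observables σ_x, σ_y and the particular state are arbitrary choices for illustration) evaluates both sides of σ_A σ_B ≥ ½|⟨[A, B]⟩| and confirms that the bound holds.

```python
import numpy as np

# Minimal sketch: numerical check of the Robertson inequality for the
# non-commuting qubit observables σ_x and σ_y in an arbitrary pure state.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

psi = np.array([np.cos(0.4), np.exp(0.7j) * np.sin(0.4)])   # illustrative state

def mean(op):
    return np.real(np.vdot(psi, op @ psi))      # expectation value <op>

def stdev(op):
    return np.sqrt(mean(op @ op) - mean(op) ** 2)

comm = sx @ sy - sy @ sx                        # [σ_x, σ_y] = 2i σ_z
lhs = stdev(sx) * stdev(sy)
rhs = 0.5 * abs(np.vdot(psi, comm @ psi))       # ½ |<[A, B]>|
print(lhs, rhs, lhs >= rhs - 1e-12)             # the inequality holds
```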

From uncertainty to no-hidden-variables

The existence of the uncertainty principle naturally raises the question of whether quantum mechanics can be understood as an approximation to a more exact theory. Do there exist "hidden variables", more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory can provide? A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics.

John Stewart Bell published the theorem now known by his name in 1964, investigating more deeply a thought experiment originally proposed in 1935 by Einstein, Boris Podolsky and Nathan Rosen. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. If a Bell test is performed in a laboratory and the results are not thus constrained, then they are inconsistent with the hypothesis that local hidden variables exist. Such results would support the position that there is no way to explain the phenomena of quantum mechanics in terms of a more fundamental description of nature that is more in line with the rules of classical physics. Many types of Bell test have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests". To date, Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave.

Quantum systems as measuring devices

The Robertson–Schrödinger uncertainty principle establishes that when two observables do not commute, there is a tradeoff in predictability between them. The Wigner–Araki–Yanase theorem demonstrates another consequence of non-commutativity: the presence of a conservation law limits the accuracy with which observables that fail to commute with the conserved quantity can be measured. Further investigation in this line led to the formulation of the Wigner–Yanase skew information.

Historically, experiments in quantum physics have often been described in semiclassical terms. For example, the spin of an atom in a Stern–Gerlach experiment might be treated as a quantum degree of freedom, while the atom is regarded as moving through a magnetic field described by the classical theory of Maxwell's equations. But the devices used to build the experimental apparatus are themselves physical systems, and so quantum mechanics should be applicable to them as well. Beginning in the 1950s, Léon Rosenfeld, Carl Friedrich von Weizsäcker and others tried to develop consistency conditions that expressed when a quantum-mechanical system could be treated as a measuring apparatus. One proposal for a criterion regarding when a system used as part of a measuring device can be modeled semiclassically relies on the Wigner function, a quasiprobability distribution that can be treated as a probability distribution on phase space in those cases where it is everywhere non-negative.

Decoherence

A quantum state for an imperfectly isolated system will generally evolve to be entangled with the quantum state for the environment. Consequently, even if the system's initial state is pure, the state at a later time, found by taking the partial trace of the joint system-environment state, will be mixed. This phenomenon of entanglement produced by system-environment interactions tends to obscure the more exotic features of quantum mechanics that the system could in principle manifest. Quantum decoherence, as this effect is known, was first studied in detail during the 1970s. (Earlier investigations into how classical physics might be obtained as a limit of quantum mechanics had explored the subject of imperfectly isolated systems, but the role of entanglement was not fully appreciated.) A significant portion of the effort involved in quantum computing research is to avoid the deleterious effects of decoherence.

To illustrate, let ρ_S denote the initial state of the system, ρ_E the initial state of the environment and H the Hamiltonian specifying the system-environment interaction. The density operator ρ_E can be diagonalized and written as a linear combination of the projectors onto its eigenvectors:

ρ_E = ∑_i p_i |ψ_i⟩⟨ψ_i|.

Expressing time evolution for a duration t by the unitary operator U = e^(−iHt/ℏ), the state for the system after this evolution is

ρ_S′ = tr_E [ U (ρ_S ⊗ ρ_E) U† ],

which evaluates to

ρ_S′ = ∑_{i,j} √p_j ⟨ψ_i|U|ψ_j⟩ ρ_S ⟨ψ_j|U†|ψ_i⟩ √p_j.

The quantities √p_j ⟨ψ_i|U|ψ_j⟩ surrounding ρ_S can be identified as Kraus operators, and so this defines a quantum channel.
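The loss of coherence can be seen in a toy model. The Python sketch below (NumPy assumed) couples a system qubit to a one-qubit "environment" through a controlled-phase interaction and traces out the environment; the interaction, states, and coupling strength are illustrative choices, not a model of any particular physical system.

```python
import numpy as np

# Minimal sketch of decoherence: a system qubit entangling with a one-qubit
# environment via a controlled-phase coupling. Tracing out the environment
# leaves the system in a mixed state with suppressed off-diagonal terms.

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_S = np.outer(plus, plus.conj())               # pure system state |+><+|
rho_E = np.outer(plus, plus.conj())               # environment state

theta = 2.0                                       # coupling strength × time (arbitrary)
U = np.diag([1, 1, 1, np.exp(1j * theta)])        # controlled-phase on system ⊗ environment

joint = U @ np.kron(rho_S, rho_E) @ U.conj().T    # evolve the joint state

# Partial trace over the environment (second tensor factor).
joint4 = joint.reshape(2, 2, 2, 2)                # indices: s, e, s', e'
rho_S_out = np.einsum('iaja->ij', joint4)

print(np.round(rho_S_out, 3))
print(abs(rho_S_out[0, 1]))   # coherence shrunk from 0.5 toward 0
```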

Specifying a form of interaction between system and environment can establish a set of "pointer states," states for the system that are (approximately) stable, apart from overall phase factors, with respect to environmental fluctuations. A set of pointer states defines a preferred orthonormal basis for the system's Hilbert space.

Quantum information and computation

Quantum information science studies how information science and its application as technology depend on quantum-mechanical phenomena. Understanding measurement in quantum physics is important for this field in many ways, some of which are briefly surveyed here.

Measurement, entropy, and distinguishability

The von Neumann entropy is a measure of the statistical uncertainty represented by a quantum state. For a density matrix ρ, the von Neumann entropy is

S(ρ) = −tr(ρ ln ρ);

writing ρ in terms of its basis of eigenvectors,

ρ = ∑_i λ_i |i⟩⟨i|,

the von Neumann entropy is

S(ρ) = −∑_i λ_i ln λ_i.

This is the Shannon entropy of the set of eigenvalues λ_i interpreted as a probability distribution, and so the von Neumann entropy is the Shannon entropy of the random variable defined by measuring in the eigenbasis of ρ. Consequently, the von Neumann entropy vanishes when ρ is pure. The von Neumann entropy of ρ can equivalently be characterized as the minimum Shannon entropy for a measurement given the quantum state ρ, with the minimization over all POVMs with rank-1 elements.
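A minimal Python sketch (NumPy assumed; the two example states are arbitrary) computes S(ρ) from the eigenvalues and confirms the two limiting cases mentioned above: zero for a pure state and ln 2 for the maximally mixed qubit state.

```python
import numpy as np

# Minimal sketch: von Neumann entropy S(ρ) = -Σ_i λ_i ln λ_i from the
# eigenvalues of a density matrix.

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # 0 ln 0 is taken to be 0
    return float(-np.sum(evals * np.log(evals)))

pure = np.array([[1, 0], [0, 0]], dtype=complex)   # a pure state |0><0|
mixed = np.eye(2, dtype=complex) / 2               # the maximally mixed state

print(von_neumann_entropy(pure))    # 0.0
print(von_neumann_entropy(mixed))   # ln 2 ≈ 0.693
```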

Many other quantities used in quantum information theory also find motivation and justification in terms of measurements. For example, the trace distance between quantum states is equal to the largest difference in probability that those two quantum states can imply for a measurement outcome:

T(ρ, σ) = ½ tr|ρ − σ| = max_E [ tr(Eρ) − tr(Eσ) ],

where the maximization is over all POVM elements E. Similarly, the fidelity of two quantum states, defined by

F(ρ, σ) = ( tr √(√ρ σ √ρ) )²,

expresses the probability that one state will pass a test for identifying a successful preparation of the other. The trace distance provides bounds on the fidelity via the Fuchs–van de Graaf inequalities:

1 − √F(ρ, σ) ≤ T(ρ, σ) ≤ √(1 − F(ρ, σ)).
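The Python sketch below (NumPy and SciPy assumed for the matrix square root; the two qubit states are arbitrary examples) computes the trace distance and fidelity and checks the Fuchs–van de Graaf inequalities numerically.

```python
import numpy as np
from scipy.linalg import sqrtm

# Minimal sketch: trace distance, fidelity, and a check of the
# Fuchs–van de Graaf inequalities 1 - sqrt(F) <= T <= sqrt(1 - F)
# for two arbitrarily chosen qubit states.

def trace_distance(rho, sigma):
    evals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(evals))            # ½ tr|ρ - σ|

def fidelity(rho, sigma):
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
sigma = np.eye(2, dtype=complex) / 2

T, F = trace_distance(rho, sigma), fidelity(rho, sigma)
print(T, F)
print(1 - np.sqrt(F) <= T <= np.sqrt(1 - F))      # True
```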

Quantum circuits

Circuit representation of measurement. The single line on the left-hand side stands for a qubit, while the two lines on the right-hand side represent a classical bit.

Quantum circuits are a model for quantum computation in which a computation is a sequence of quantum gates followed by measurements. The gates are reversible transformations on a quantum mechanical analog of an n-bit register. This analogous structure is referred to as an n-qubit register. Measurements, drawn on a circuit diagram as stylized pointer dials, indicate where and how a result is obtained from the quantum computer after the steps of the computation are executed. Without loss of generality, one can work with the standard circuit model, in which the gate set consists of single-qubit unitary transformations and controlled NOT gates on pairs of qubits, and all measurements are in the computational basis.
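As a hedged illustration of this model, the following plain state-vector simulation in Python (NumPy assumed; not the API of any particular quantum-computing framework) applies a Hadamard and a CNOT gate to a two-qubit register and then samples a computational-basis measurement outcome via the Born rule.

```python
import numpy as np

# Minimal sketch of the circuit model: prepare a Bell state on a two-qubit
# register with H and CNOT, then measure in the computational basis.

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = np.zeros(4, dtype=complex)
state[0] = 1.0                              # the register starts in |00>
state = CNOT @ (np.kron(H, I2) @ state)     # Bell state (|00> + |11>)/√2

probs = np.abs(state) ** 2                  # Born rule in the computational basis
rng = np.random.default_rng(0)
outcome = rng.choice(4, p=probs)
print(probs)                                # [0.5, 0, 0, 0.5]
print(format(outcome, '02b'))               # '00' or '11', never '01' or '10'
```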

Measurement-based quantum computation

Measurement-based quantum computation (MBQC) is a model of quantum computing in which the answer to a question is, informally speaking, created in the act of measuring the physical system that serves as the computer.

Quantum tomography

Quantum state tomography is a process by which, given a set of data representing the results of quantum measurements, a quantum state consistent with those measurement results is computed. It is named by analogy with tomography, the reconstruction of three-dimensional images from slices taken through them, as in a CT scan. Tomography of quantum states can be extended to tomography of quantum channels and even of measurements.

Quantum metrology

Quantum metrology is the use of quantum physics to aid the measurement of quantities that, generally, had meaning in classical physics, such as exploiting quantum effects to increase the precision with which a length can be measured. A celebrated example is the introduction of squeezed light into the LIGO experiment, which increased its sensitivity to gravitational waves.

Laboratory implementations

The range of physical procedures to which the mathematics of quantum measurement can be applied is very broad. In the early years of the subject, laboratory procedures involved the recording of spectral lines, the darkening of photographic film, the observation of scintillations, finding tracks in cloud chambers, and hearing clicks from Geiger counters. Language from this era persists, such as the description of measurement outcomes in the abstract as "detector clicks".

The double-slit experiment is a prototypical illustration of quantum interference, typically described using electrons or photons. The first interference experiment to be carried out in a regime where both wave-like and particle-like aspects of photon behavior are significant was G. I. Taylor's test in 1909. Taylor used screens of smoked glass to attenuate the light passing through his apparatus, to the extent that, in modern language, only one photon would be illuminating the interferometer slits at a time. He recorded the interference patterns on photographic plates; for the dimmest light, the exposure time required was roughly three months. In 1974, the Italian physicists Pier Giorgio Merli, Gian Franco Missiroli, and Giulio Pozzi implemented the double-slit experiment using single electrons and a television tube. A quarter-century later, a team at the University of Vienna performed an interference experiment with buckyballs, in which the buckyballs that passed through the interferometer were ionized by a laser, and the ions then induced the emission of electrons, emissions which were in turn amplified and detected by an electron multiplier.

Modern quantum optics experiments can employ single-photon detectors. For example, in the "BIG Bell test" of 2018, several of the laboratory setups used single-photon avalanche diodes. Another laboratory setup used superconducting qubits. The standard method for performing measurements upon superconducting qubits is to couple a qubit with a resonator in such a way that the characteristic frequency of the resonator shifts according to the state for the qubit, and detecting this shift by observing how the resonator reacts to a probe signal.

Interpretations of quantum mechanics

Niels Bohr and Albert Einstein, pictured here at Paul Ehrenfest's home in Leiden (December 1925), had a long-running collegial dispute about what quantum mechanics implied for the nature of reality.

Despite the consensus among scientists that quantum physics is in practice a successful theory, disagreements persist on a more philosophical level. Many debates in the area known as quantum foundations concern the role of measurement in quantum mechanics. Recurring questions include which interpretation of probability theory is best suited for the probabilities calculated from the Born rule; and whether the apparent randomness of quantum measurement outcomes is fundamental, or a consequence of a deeper deterministic process. Worldviews that present answers to questions like these are known as "interpretations" of quantum mechanics; as the physicist N. David Mermin once quipped, "New interpretations appear every year. None ever disappear."

A central concern within quantum foundations is the "quantum measurement problem," though how this problem is delimited, and whether it should be counted as one question or multiple separate issues, are contested topics. Of primary interest is the seeming disparity between apparently distinct types of time evolution. Von Neumann declared that quantum mechanics contains "two fundamentally different types" of quantum-state change. First, there are those changes involving a measurement process, and second, there is unitary time evolution in the absence of measurement. The former is stochastic and discontinuous, writes von Neumann, and the latter deterministic and continuous. This dichotomy has set the tone for much later debate. Some interpretations of quantum mechanics find the reliance upon two different types of time evolution distasteful and regard the ambiguity of when to invoke one or the other as a deficiency of the way quantum theory was historically presented. To bolster these interpretations, their proponents have worked to derive ways of regarding "measurement" as a secondary concept and deducing the seemingly stochastic effect of measurement processes as approximations to more fundamental deterministic dynamics. However, consensus has not been achieved among proponents of these interpretations on the correct way to implement this program, and in particular how to justify the use of the Born rule to calculate probabilities. Other interpretations regard quantum states as statistical information about quantum systems, thus asserting that abrupt and discontinuous changes of quantum states are not problematic, simply reflecting updates of the available information. Of this line of thought, Bell asked, "Whose information? Information about what?" Answers to these questions vary among proponents of the informationally-oriented interpretations.

Technological utopianism

From Wikipedia, the free encyclopedia
A NASA poster about a fictional Mars tour. Technological advances in space travel are often a theme in utopias.

Technological utopianism, often called techno-utopianism or technoutopianism, is any ideology based on the premise that advances in science and technology could and should bring about a utopia, or at least help to fulfill one or another utopian ideal. A techno-utopia is therefore an ideal society, in which laws, government, and social conditions are solely operating for the benefit and well-being of all its citizens, set in the near- or far-future, as advanced science and technology will allow these ideal living standards to exist; for example, post-scarcity, transformations in human nature, the avoidance or prevention of suffering and even the end of death. Technological utopianism is often connected with other discourses presenting technologies as agents of social and cultural change, such as technological determinism or media imaginaries.

A tech-utopia does not disregard any problems that technology may cause, but strongly believes that technology allows mankind to make social, economic, political, and cultural advancements. Overall, technological utopianism views technology's impacts as extremely positive. In the late 20th and early 21st centuries, several ideologies and movements, such as the cyberdelic counterculture, the Californian Ideology, cyber-utopianism, transhumanism, and singularitarianism, have emerged promoting a form of techno-utopia as a reachable goal. The movement known as effective accelerationism (e/acc) even advocates for "progress at all costs". Cultural critic Imre Szeman argues technological utopianism is an irrational social narrative because there is no evidence to support it. He concludes that it shows the extent to which modern societies place faith in narratives of progress and technology overcoming things, despite all evidence to the contrary.

History

From the 19th to mid-20th centuries

Karl Marx believed that science and democracy were the right and left hands of what he called the move from the realm of necessity to the realm of freedom. He argued that advances in science helped delegitimize the rule of kings and the power of the Christian Church. 19th-century liberals, socialists, and republicans often embraced techno-utopianism. Radicals like Joseph Priestley pursued scientific investigation while advocating democracy. Robert Owen, Charles Fourier and Henri de Saint-Simon in the early 19th century inspired communalists with their visions of a future scientific and technological evolution of humanity using reason. Radicals seized on Darwinian evolution to validate the idea of social progress. Edward Bellamy's socialist utopia in Looking Backward, which inspired hundreds of socialist clubs in the late 19th century United States and a national political party, was as highly technological as Bellamy's imagination. For Bellamy and the Fabian Socialists, socialism was to be brought about as a painless corollary of industrial development.

Marx and Friedrich Engels saw more pain and conflict involved, but agreed about the inevitable end. Marxists argued that the advance of technology laid the groundwork not only for the creation of a new society, with different property relations, but also for the emergence of new human beings reconnected to nature and themselves. At the top of the agenda for empowered proletarians was "to increase the total productive forces as rapidly as possible". The 19th and early 20th century Left, from social democrats to communists, were focused on industrialization, economic development and the promotion of reason, science, and the idea of progress.

According to historian Asif Siddiqi, technological utopianism was a "millenarian mantra" in the Soviet Union from its inception. The Bolsheviks imagined "a world of magnificent factories and mechanized agriculture that produced all of society's necessities," a new socialist machine age. Siddiqi writes that "this obsession with the power of science and technology to remake society was partly rooted in crude Marxism, but much of it derived from the Bolsheviks' own vision to remake Russia into a modern state, one which would compare and compete with the leading capitalist nations in forging a new path to the future." From the 1930s onwards, Soviet technological utopianism embraced a populist view of technological achievements, which Siddiqi summarizes as "technology for the masses." Soviet science fiction was heavily focused on future technology, and often depicted a convergence between technological utopia and socialist utopia.

Sovietologist Paul Josephson argued that most strains of Soviet technological utopianism emphasized technology was apolitical, "serving the profit motive and the industrialist under capitalism, but benefiting all humanity under socialism." To avoid technological dependence on capitalist states, the Soviet Union and other socialist governments influenced by its narratives sought to create domestic technological innovations, supported by autarkic engineering communities and supply chains.

Some technological utopians promoted eugenics. Holding that in studies of families, such as the Jukes and Kallikaks, science had proven that many traits such as criminality and alcoholism were hereditary, many advocated the sterilization of those displaying negative traits. Forcible sterilization programs were implemented in several states in the United States. H. G. Wells in works such as The Shape of Things to Come promoted technological utopianism. To many philosophers, the horrors of World War II and the Holocaust, as Theodor Adorno underlined, seemed to shatter the ideal of Condorcet and other thinkers of the Enlightenment, which commonly equated scientific progress with social progress.

From the late 20th to early 21st centuries

The Goliath of totalitarianism will be brought down by the David of the microchip.

— Ronald Reagan, 14 June 1989

A movement of techno-utopianism began to flourish again in the dot-com culture of the 1990s, particularly on the West Coast of the United States, especially based around Silicon Valley. The Californian Ideology was a set of beliefs combining bohemian and anti-authoritarian attitudes from the counterculture of the 1960s with techno-utopianism and support for libertarian economic policies. It was reflected in, reported on, and even actively promoted in the pages of Wired magazine, which was founded in San Francisco in 1993 and served for a number of years as the "bible" of its adherents.

This form of techno-utopianism reflected a belief that technological change revolutionizes human affairs, and that digital technology in particular – of which the Internet was but a modest harbinger – would increase personal freedom by freeing the individual from the rigid embrace of bureaucratic big government. "Self-empowered knowledge workers" would render traditional hierarchies redundant; digital communications would allow them to escape the modern city, an "obsolete remnant of the industrial age".

Similar forms of "digital utopianism" has often entered in the political messages of party and social movements that point to the Web or more broadly to new media as harbingers of political and social change. Its adherents claim it transcended conventional "right/left" distinctions in politics by rendering politics obsolete. However, Western techno-utopianism disproportionately attracted adherents from the libertarian right end of the political spectrum. Western techno-utopians often have a hostility toward government regulation and a belief in the superiority of the free market system. Prominent "oracles" of techno-utopianism included George Gilder and Kevin Kelly, an editor of Wired who also published several books.

During the late 1990s dot-com boom, when the speculative bubble gave rise to claims that an era of "permanent prosperity" had arrived, techno-utopianism flourished, typically among the small percentage of the population who were employees of Internet startups and/or owned large quantities of high-tech stocks. With the subsequent crash, many of these dot-com techno-utopians had to rein in some of their beliefs in the face of the clear return of traditional economic reality. According to The Economist, Wikipedia "has its roots in the techno-optimism that characterised the internet at the end of the 20th century. It held that ordinary people could use their computers as tools for liberation, education, and enlightenment."

In the late 1990s and especially during the first decade of the 21st century, technorealism and techno-progressivism emerged among advocates of technological change as critical alternatives to techno-utopianism. However, technological utopianism persists in the 21st century as a result of new technological developments and their impact on society. For example, several technical journalists and social commentators, such as Mark Pesce, have interpreted the WikiLeaks phenomenon and the United States diplomatic cables leak in early December 2010 as a precursor to, or an incentive for, the creation of a techno-utopian transparent society. Cyber-utopianism, first coined by Evgeny Morozov, is another manifestation of this, in particular in relation to the Internet and social networking.

Nick Bostrom contends that the rise of machine superintelligence carries both existential risks and an extreme potential to improve the future, which might be realized quickly in the event of an intelligence explosion. In Deep Utopia: Life and Meaning in a Solved World, he further explored ideal scenarios where human civilization reaches technological maturity and solves its diverse coordination problems. He listed some technologies that are theoretically achievable, such as cognitive enhancement, reversal of aging, self-replicating spacecraft, arbitrary sensory inputs (taste, sound...), or the precise control of motivation, mood, well-being and personality.

In North Korea, technological utopianism remains one of the key themes of the state's Juche ideology. The pursuit of advanced strategic technologies is promoted as an integral part of autarkic economic development. North Korean technological utopianism essentially rests on three narratives: the rejection of consumer society and culture, an emphasis on heavy industry, and a belief in the ability of the masses of workers to make great technological achievements under the Workers' Party of Korea. In practice, this has resulted in most of North Korea's technological resources being utilized for large scale, resource intensive, infrastructure and military projects, many of which have primarily symbolic importance. Domestic innovations in nuclear and space sciences continue to play a major role in the state's propaganda narratives, which seek to portray North Korea as a modern regional power.

Principles

Bernard Gendron, a professor of philosophy at the University of Wisconsin–Milwaukee, defines the four principles of modern technological utopians in the late 20th and early 21st centuries as follows:

  1. We are presently undergoing a (post-industrial) revolution in technology;
  2. In the post-industrial age, technological growth will be sustained (at least);
  3. In the post-industrial age, technological growth will lead to the end of economic scarcity;
  4. The elimination of economic scarcity will lead to the elimination of every major social evil.

Media theorist Douglas Rushkoff presents us with multiple claims that surround the basic principles of technological utopianism:

  1. Technology reflects and encourages the best aspects of human nature, fostering "communication, collaboration, sharing, helpfulness, and community".
  2. Technology improves our interpersonal communication, relationships, and communities. Early Internet users shared their knowledge of the Internet with others around them.
  3. Technology democratizes society. The expansion of access to knowledge and skills led to the connection of people and information. The broadening of freedom of expression created "the online world...in which we are allowed to voice our own opinions". The reduction of the inequalities of power and wealth meant that everyone has an equal status on the internet and is allowed to do as much as the next person.
  4. Technology inevitably progresses. The interactivity that came from the inventions of the TV remote control, video game joystick, computer mouse and computer keyboard allowed for much more progress.
  5. Unforeseen impacts of technology are positive. As more people discovered the Internet, they took advantage of being linked to millions of people, and turned the Internet into a social revolution. The government released it to the public, and its "social side effect… [became] its main feature".
  6. Technology increases efficiency and consumer choice. The creation of the TV remote, video game joystick, and computer mouse liberated these technologies and allowed users to manipulate and control them, giving them many more choices.
  7. New technology can solve the problems created by old technology. Social networks and blogs were created out of the collapse of dot.com bubble businesses' attempts to run pyramid schemes on users.

Criticisms

Critics claim that techno-utopianism's identification of social progress with scientific progress is a form of positivism and scientism. Critics of modern libertarian techno-utopianism point out that it tends to focus on "government interference" while dismissing the positive effects of the regulation of business. They also point out that it has little to say about the environmental impact of technology and that its ideas have little relevance for much of the rest of the world that are still relatively quite poor (see global digital divide). In his 2010 study System Failure: Oil, Futurity, and the Anticipation of Disaster, Canada Research Chairholder in cultural studies Imre Szeman argues that technological utopianism is one of the social narratives that prevent people from acting on the knowledge they have concerning the effects of oil on the environment.

Another concern is the amount of reliance society may place on their technologies in these techno-utopia settings. For example, in a controversial 2011 article "Techno-Utopians are Mugged by Reality", L. Gordon Crovitz of The Wall Street Journal explored the concept of the violation of free speech by shutting down social media to stop violence. As a result of a wave of British cities being looted, former British Prime Minister David Cameron argued that the government should have the ability to shut down social media during crime sprees so that the situation could be contained. A poll was conducted to see if Twitter users would prefer to let the service be closed temporarily or keep it open so they could chat about the famous television show The X-Factor. The end report showed that every respondent opted for The X-Factor discussion. Crovitz contends that the negative social effect of technological utopia is that society is so addicted to technology that humanity simply cannot be parted from it even for the greater good. While many techno-utopians would like to believe that digital technology is for the greater good, he says it can also be used negatively to bring harm to the public. These two criticisms are sometimes referred to as a technological anti-utopian view or a techno-dystopia.

According to Ronald Adler and Russell Proctor, mediated communication such as phone calls, instant messaging and text messaging are steps towards a utopian world in which one can easily contact another regardless of time or location. However, mediated communication removes many aspects that are helpful in transferring messages. As it stands as of 2022, most text, email, and instant messages offer fewer nonverbal cues about the speaker's feelings than do face-to-face encounters. This makes it so that mediated communication can easily be misconstrued and the intended message is not properly conveyed. With the absence of tone, body language, and environmental context, the chance of a misunderstanding is much higher, rendering the communication ineffective. In fact, mediated technology can be seen from a dystopian view because it can be detrimental to effective interpersonal communication. These criticisms would only apply to messages that are prone to misinterpretation as not every text based communication requires contextual cues. The limitations of lacking tone and body language in text-based communication could potentially be mitigated by video and augmented reality versions of digital communication technologies.

In 2019, philosopher Nick Bostrom introduced the notion of a vulnerable world, "one in which there is some level of technological development at which civilization almost certainly gets devastated by default", citing the risks of a pandemic caused by a DIY biohacker, or an arms race triggered by the development of novel armaments. He writes that "Technology policy should not unquestioningly assume that all technological progress is beneficial, or that complete scientific openness is always best, or that the world has the capacity to manage any potential downside of a technology after it is invented."

Fusion mechanism

From Wikipedia, the free encyclopedia

A fusion mechanism is any mechanism by which cell fusion or virus–cell fusion takes place, as well as the machinery that facilitates these processes. Cell fusion is the formation of a hybrid cell from two separate cells. There are three major actions taken in both virus–cell fusion and cell–cell fusion: the dehydration of polar head groups, the promotion of a hemifusion stalk, and the opening and expansion of pores between fusing cells. Virus–cell fusion occurs during infection by several viruses that are significant health concerns today. Some of these include HIV, Ebola, and influenza. For example, HIV infects by fusing with the membranes of immune system cells. In order for HIV to fuse with a cell, it must be able to bind to the receptors CD4, CCR5, and CXCR4. Cell fusion also occurs in a multitude of mammalian cells including gametes and myoblasts.

Viral mechanisms

Fusogens

Proteins that allow viral or cell membranes to overcome barriers to fusion are called fusogens. Fusogens involved in virus-to-cell fusion mechanisms were the first of these proteins to be discovered. Viral fusion proteins are necessary for membrane fusion to take place. There is evidence that ancestral species of mammals may have incorporated these same proteins into their own cells as a result of infection. For this reason, similar mechanisms and machinery are utilized in cell–cell fusion.

In response to certain stimuli, such as low pH or binding to cellular receptors, these fusogens will change conformation. The conformation change allows the exposure of hydrophobic regions of the fusogens that would normally be hidden internally due to energetically unfavorable interactions with the cytosol or extracellular fluid. These hydrophobic regions are known as fusion peptides or fusion loops, and they are responsible for causing localized membrane instability and fusion. Scientists have found the following four classes of fusogens to be involved with virus–cell or cell–cell fusions.

Class I fusogens

These fusogens are trimeric, meaning they are made of three subunits. Their fusion loops are hidden internally at the junctions of the monomers before fusion takes place. Once fusion is complete, they refold into a different trimeric structure than the structure they had before fusion. These fusogens are characterized by a group of six α-helices in their post-fusion structure. This class of fusogens contains some of the proteins utilized by influenza, HIV, coronaviruses, and Ebola during infection. This class of fusogens also includes syncytins, which are utilized in mammalian cell fusions.

Class II fusogens

Class II fusogens contain multiple β-pleated sheets. These proteins are also trimeric and take part in the insertion of fusion loops into the target membrane. Their conformation changes can be triggered by exposure to acidic environments. Class II fusogens have a structure distinct from Class I fusogens, but similarly lower the energy barrier for membrane fusion. Class II fusogens are involved in flaviviruses (tick-borne encephalitis); alphaviruses (Semliki Forest virus, Sindbis virus, chikungunya and rubella); and phleboviruses (Rift Valley fever virus and Uukuniemi virus).

Class III fusogens

Class III fusogens are involved with virus–cell fusions. Much like fusogens in the previous two classes, these proteins are trimeric. However, they contain both α-helices and β-pleated sheets. During cell fusion the monomers of these proteins will dissociate but will return to a different trimeric structure after the fusion is complete. They are also involved in the insertion of fusion loops in the membrane.

Class IV fusogens

These reoviral cell–cell fusogens contain fusion loops that can induce cell fusion. They form polymeric structures to induce fusion of membranes. Reoviruses do not have membranes themselves, so class IV fusogens are not usually involved in traditional virus–cell fusion. However, when they are expressed on the surface of cells, they can induce cell–cell fusion.

Class I–III mechanism

The fusogens of classes I–III have many structural differences. However, the method they utilize to induce membrane fusion is mechanistically similar. When activated, all of these fusogens form elongated trimeric structures and bury their fusion peptides into the membrane of the target cell. They are secured in the viral membrane by hydrophobic trans-membrane regions. These fusogens will then fold in on themselves forming a structure that is reminiscent of a hairpin. This folding action brings the transmembrane region and the fusion loop adjacent to each other. Consequently, the viral membrane and the target cell membrane are also pulled close together. As the membranes are brought closer together, they are dehydrated, which allows the membranes to be brought into contact. Interactions between hydrophobic amino-acid residues and the adjacent membranes cause destabilization of the membranes. This allows the phospholipids in the outer layer of each membrane to interact with each other. The outer leaflets of the two membranes form a hemifusion stalk to minimize energetically unfavorable interactions between hydrophobic phospholipid tails and the environment. This stalk expands, allowing the inner leaflets of each membrane to interact. These inner leaflets then fuse, forming a fusion pore. At this point, the cytoplasmic components of the cell and the virus begin to mix. As the fusion pore expands, virus–cell fusion is completed.

Mammalian cell fusion mechanisms

Though there is much variation in different fusions between mammalian cells, there are five stages that occur in a majority of these fusion events: "programming fusion-competent status, chemotaxis, membrane adhesion, membrane fusion, and post-fusion resetting."

Programming fusion-competent status

This first step, also known as priming, encompasses the necessary events that must take place in order for cells to gain the ability to fuse. In order for a cell to become fusion-competent, it must manipulate the make-up of its membrane to facilitate membrane fusion. It also must construct necessary proteins to mediate fusion. Finally, it must eliminate hindrances to fusion. For example, a cell might free itself from the extracellular matrix in order to allow the cell more motility to facilitate fusion.

Monocytes, macrophages, and osteoclasts

Monocytes and macrophages can become fusion-competent in response to cytokines, which are protein-signalling molecules. Some interleukins prompt monocytes and macrophages to fuse to form foreign-body giant cells as part of a body's immune response. For example, interleukin-4 can promote the activation of transcription factor STAT6 by phosphorylation. This can then trigger expression of matrix metalloproteinase 9 (MMP9). MMP9 can degrade proteins in the extracellular matrix, which aids in the priming of macrophages for fusion.

Osteoclasts are multinucleated bone-resorbing cells. They are formed by the fusion of differentiated monocytes, much like foreign-body giant cells. However, the molecules that induce fusion-competence in macrophages that are destined to become osteoclasts are different from those that promote formation of foreign-body giant cells. For instance, transcription factor NFATC1 regulates genes that are specific to osteoclast differentiation.

Haploid cells

Zygote formation is a crucial step in sexual reproduction, and it is reliant on the fusion of sperm and egg cells. Consequently, these cells must be primed to gain fusion-competence. Phosphatidylserine is a phospholipid that usually resides on the inner layer of the cell membrane. After sperm cells are primed, phosphatidylserine can be found on the outer leaflet of the membrane. It is thought that this helps stabilize the membrane at the head of the sperm, and that it may play a role in allowing the sperm to enter the zona pellucida that covers egg cells. This unusual location of phosphatidylserine is an example of membrane restructuring during priming for cell fusion.

Chemotaxis

Chemotaxis is the process of recruitment in response to the presence of certain signal molecules. Cells that are destined to fuse are attracted to each other via chemotaxis. For example, sperm cells are attracted to the egg cell through signalling by progesterone. Similarly, in muscle tissue, myoblasts can be recruited for fusion by IL-4.

Membrane adhesion

Before cells can fuse, they must be in contact with one another. This can be accomplished through cell recognition and attachment by cellular machinery. Syncytin-1 is a Class I fusogen involved in the fusion of cells to form osteoclasts in humans. During the early actions of Class I fusogens in cell fusion, they insert their fusion loops into a target membrane. Consequently, the action of syncytin-1 is an example of membrane adhesion as it links the two cells together to prepare them for fusion. This step also encompasses the dehydration of the membranes at the site of fusion. This is necessary to overcome the energy requirements necessary for fusion and to ensure that the membranes are in very close proximity for fusion to occur.

Membrane fusion

Membrane fusion is characterized by the formation of a fusion pore, which allows the internal contents of both cells to mix. It is first accomplished by the mixing of lipids of the outer layers of the fusing membranes, which forms a hemifusion stalk. Then the inner leaflets can interact and fuse, creating an open gap where the membranes have fused. This gap is the fusion pore. This process is mediated by fusogens. Fusogens are highly conserved in mammals, and it is theorized that mammals adopted them after infection by retroviruses. Because they are highly conserved, they perform their task through a similar mechanism to the one used by viral fusogens as previously described. It is theorized that actin polymerization and other actions of the cytoskeleton might aid in the widening of the fusion pore to complete fusion.

Post-fusion resetting

Upon the completion of fusion, the machinery used to fuse must be disassembled or altered to avoid fusion of the new, multinucleated cell with additional cells. One example of this is the final trimeric structure taken on by Class I, II, and III fusogens. Each takes on a structure that is markedly different from its form before fusion occurred. This likely alters their activity, preventing them from initiating another fusion.

Fusion as a therapeutic target

Glycoproteins of some viruses, such as mammarenaviruses, can lose their fusion ability in the presence of NMT inhibitors. This can be exploited as a therapeutic antiviral approach against hemorrhagic fever viruses such as Lassa and Junín in the general public, as well as against LCMV, which can be fatal in immunocompromised patients.

Electronic quantum holography

From Wikipedia, the free encyclopedia

Electronic quantum holography (also known as quantum holographic data storage) is a holographic imagery and information storage technology based on the principles of electron holography. By recording both the amplitude and phase of electron waves through interference using a reference wave, electronic quantum holography can encode and read out data at high precision and density, storing as much as 35 bits per electron.
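As a general illustration of the recording principle (standard holography mathematics, not a formula taken from the work described here), the intensity produced by superposing an object wave with a reference wave contains a cross term that preserves their relative phase, which is why both amplitude and phase can later be recovered:

I(\mathbf{r}) = \left| \psi_{\text{obj}}(\mathbf{r}) + \psi_{\text{ref}}(\mathbf{r}) \right|^2 = A_{\text{obj}}^2 + A_{\text{ref}}^2 + 2 A_{\text{obj}} A_{\text{ref}} \cos\!\left( \varphi_{\text{obj}}(\mathbf{r}) - \varphi_{\text{ref}}(\mathbf{r}) \right)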

Electronic quantum holography differs from classical holography in its fundamental operating principles. Typically, classical holography relies on optical coherence, using the interference between a reference beam and an object beam to record the phase (the position of the wave) and amplitude (the height of the wave) of light. Because this process depends on stable, first-order interference, classical holography requires coherent and well-aligned light sources. Additionally, the performance of classical holography can falter under unstable conditions such as mechanical vibrations, random phase fluctuations, or stray illumination.

By contrast, electronic quantum holography, and quantum holography itself, encode holographic information in the second-order coherence of entangled photon pairs rather than first-order coherence. Through the use of spatial-polarization hyper-entangled photons (photons that are linked in both their physical path and the direction of their light wave's vibration), quantum holography can reconstruct phase images through coincidence measurements even when illumination is incoherent or unpolarized. This allows for remote interference between photons that do not share overlapping paths, provides protection from noise and phase disorder, and can produce enhanced spatial resolution compared to classical holography.
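The second-order coherence referred to here is conventionally written as a normalized correlation between intensities at two detection points; in coincidence measurements it is this quantity, rather than the mean intensity, that carries the phase image (a textbook definition, included for orientation rather than taken from the cited work):

g^{(2)}(\mathbf{r}_1, \mathbf{r}_2) = \frac{\left\langle I(\mathbf{r}_1)\, I(\mathbf{r}_2) \right\rangle}{\left\langle I(\mathbf{r}_1) \right\rangle \left\langle I(\mathbf{r}_2) \right\rangle}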

History

Dennis Gabor Holography Model

While working with electron microscopy, Hungarian physicist Dennis Gabor recognized that image distortion caused by the spherical aberration of electron lenses limited resolution. To address this, he proposed a lens-less imaging method that used the wave nature of electrons to record and reconstruct the complete wavefront, both its amplitude and phase, resulting in what became known as a hologram. The practical application of electron holography emerged only later, as it required a more advanced understanding of electron interference and specialized instrumentation. Gabor's 1948 work in classical holography eventually led to his winning the Nobel Prize in Physics in 1971.

In 1968, German physicists Gottfried Möllenstedt and Gerd Wahl found that Gabor's lens-less approach was not ideal for electron microscopy. They developed the method of image-plane off-axis holography, which became one of the most successful and widely used techniques in electron holography. Similarly, American electrical engineer Emmett Leith had conducted research on off-axis holography in the 1960s, and his work helped bring holography into wider use alongside Möllenstedt and Wahl's work.

Digital holography emerged in the late 1960s, when J. W. Goodman, an American electrical engineer and physicist, proposed reconstructing an image of an object from electronically recorded holograms. This breakthrough grew in prominence with the development of charge-coupled devices, whose introduction enabled quantitative phase imaging and the generation of digital image reconstructions.

As developments in digital holography continued, the field slowly began to incorporate quantum mechanics. Advances involving coherent electron sources and digital image reconstruction allowed scientists to retrieve the full wavefunction of the electron. This was one of the first bridges between digital and electronic quantum holography, as the reconstructed wavefront represents the quantum-mechanical wavefunction of the electron beam rather than an optical analogue. Techniques based on the Aharonov-Bohm effect, which depend closely on the phase of the wavefunction, further demonstrated that holography could detect phase shifts stemming from electromagnetic potentials, even in regions containing no electric or magnetic field. This set a precedent for holography as a practical method for probing quantum phenomena such as gauge fields, magnetic flux, and microscopic electromagnetic structures.

As research entered the early 2000s, ultrafast electron microscopy and femtosecond-scale electron pulses allowed for time-resolved holography, enabling studies of rapid electron-wave dynamics. These advances eventually laid the foundation for quantum holography.

Early developments

Scanning Tunneling Microscope schematic

In 2009, Stanford University's Department of Physics set a new world record for the smallest writing, using a scanning tunneling microscope and electron waves to write the initials "SU" at 0.3 nanometers, surpassing the previous record set by IBM in 1989 using xenon atoms. This achievement also set a record for information density: before this work, stored information density had not exceeded one bit per atom, whereas researchers using electronic quantum holography were able to push the limit to 35 bits per electron, or 20 bits nm−2.
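A back-of-the-envelope check, illustrative only and not taken from the source, relates the quoted areal density to a length scale:

import math

bits_per_nm2 = 20                       # reported areal density of stored information
area_per_bit = 1.0 / bits_per_nm2       # about 0.05 nm^2 of surface per bit
side_length = math.sqrt(area_per_bit)   # about 0.22 nm, comparable to the 0.3 nm lettering
print(f"{area_per_bit:.3f} nm^2 per bit, roughly {side_length:.2f} nm on a side")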

Later, in 2019, Madan et al. explored a new holographic imaging technique using ultrafast transmission electron microscopy to visualize electromagnetic fields. They introduced both local and nonlocal holography techniques that improved time resolution, allowing researchers to measure the phase and group velocities of surface plasmon polaritons with high precision.

In particular, the nonlocal approach allowed scientists to separate the reference and probe fields, which had been a limitation of earlier optical approaches. This breakthrough opened the door to studying quantum effects and collective excitations such as excitons, phonons, and polarizabilities at atomic and sub-femtosecond scales.

Recent advancements

In 2022, Töpfer et al. developed techniques to capture holograms using photon pairs without directly detecting one of the photons. This method became known as induced coherence without induced emission; in it, researchers measure the interference of one photon to reconstruct the phase and amplitude of the undetected photon. The method proved to be a major step in improving the precision and practicality of electronic quantum holography imaging, as it improved phase stability and minimized the need for complex stabilization equipment.

In the following year, Yesharim et al. extended holography into the quantum domain with the development of quantum nonlinear holography. This approach utilizes nonlinear photonic crystals, whose patterned nonlinear coefficient shapes the spatial correlations of entangled photon pairs generated through spontaneous parametric down-conversion. Unlike typical nonlinear holography, which uses stimulated optical processes, quantum nonlinear holography uses photon pairs seeded by vacuum fluctuations, allowing the crystal structure to select specific signal-idler mode pairs while suppressing others. Using two-dimensional electric-field-poled KTP (potassium titanyl phosphate) crystals, the researchers demonstrated the ability to directly imprint Hermite-Gauss mode patterns into the nonlinear medium, allowing compact generation of spatially entangled qubits and qudits without the need for pump or beam shaping. The generated states exhibited high-fidelity correlations and violated the CHSH inequality.
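For context, the CHSH inequality combines four correlation measurements into a single quantity S; any local hidden-variable model obeys |S| ≤ 2, while entangled photon pairs can reach values up to 2√2. This is the standard form of the inequality, not the specific measurement settings used in the cited experiment:

S = E(a, b) - E(a, b') + E(a', b) + E(a', b'), \qquad |S| \le 2 \ \text{(local realism)}, \qquad |S| \le 2\sqrt{2} \ \text{(quantum bound)}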

This method minimizes the optical complexity typically required for high-dimensional quantum state engineering and is compatible with continuous-wave lasers and on-chip photonic integration. Further developments using segmented and cascaded poling structures, or future three-dimensional nonlinear photonic crystals, are expected to extend the range of available spatial modes and further tailor quantum state generation.

Recently, in 2025, research in electronic quantum holography has begun to move beyond photonic interferometers and electron-based methods towards programmable atomic systems that can directly manipulate quantum light. In a study published in Physical Review Research, Lloyd and Bekenstein demonstrated a form of quantum holography using a two-dimensional array of Rydberg atoms to construct a "quantum metasurface", which allowed them to control the phase and amplitude of a single photon with precision. Because they could control the states of the photon, the researchers were able to generate a programmable holographic pattern in the quantum wavefunction of light, demonstrating that information can be stored and projected at the quantum level. As such, this research provides a stepping stone toward scalable quantum imaging and information-storage technology.

Technology

A copper chip is placed in a microscope and cleaned. Carbon monoxide molecules are then placed on the surface and moved around. When the electrons in the copper interact with the carbon monoxide molecules, they create interference patterns that form an electronic quantum hologram. This hologram can be read like a stack of pages in a book and can contain multiple images at different wavelengths.

In optical quantum holography, information is typically encoded using spatially entangled photon pairs created through spontaneous parametric down-conversion in nonlinear crystals. The paired photons exhibit strong correlations in position and momentum that can be measured in the image and Fourier planes of the optical system. A spatial light modulator applies a phase pattern to one of the photons, while the second photon passes through a compensating or reference path. The phase information does not appear in standard, raw intensity images. Instead, it is accessed by computing second-order intensity correlations between symmetric detector pixels. Because the correlation function depends on the relative phase between the photons, the hologram can be reconstructed even when only one photon interacts with the phase object.

Example of a CCD

Additionally, quantum holographic systems generally depend on high-sensitivity electron-multiplying CCD detectors that capture millions of frames in order to accumulate adequate coincidence statistics. In general, spatial resolution is determined by the correlation width of the two-photon wavefunction, which in turn determines the smallest resolvable feature in the reconstructed phase map. Phase distortions introduced by birefringent components can be measured and compensated using spatial light modulator patterns that ensure consistent measurement bases across the detector field. In contrast to classical holography, which directly reads out diffraction patterns from intensity images, quantum holography retrieves analogous information from correlation matrices, which allows enhanced resolution and operation at lower light levels. Both effects originate from the use of entangled photons, whose second-order coherence properties allow holographic reconstruction beyond the classical diffraction limit.
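A minimal sketch of the kind of correlation estimate described above, assuming the camera frames are available as a NumPy array and that each pixel is paired with its point-symmetric partner; the function name and synthetic data are illustrative assumptions, not the published reconstruction pipeline:

import numpy as np

def symmetric_pixel_correlation(frames: np.ndarray) -> np.ndarray:
    """Estimate second-order intensity correlations between point-symmetric
    pixels, averaged over many frames (shape: n_frames x height x width)."""
    frames = np.asarray(frames, dtype=float)
    partners = frames[:, ::-1, ::-1]          # point-symmetric partner of each pixel
    mean_a = frames.mean(axis=0)
    mean_b = partners.mean(axis=0)
    # covariance term <I(x) I(-x)> - <I(x)><I(-x)>; the phase map is extracted
    # from correlation maps like this rather than from raw intensity images
    return (frames * partners).mean(axis=0) - mean_a * mean_b

# Synthetic example: many sparse, shot-noise-like frames from a 32 x 32 sensor
rng = np.random.default_rng(seed=0)
demo_frames = rng.poisson(lam=0.05, size=(10_000, 32, 32))
corr_map = symmetric_pixel_correlation(demo_frames)
print(corr_map.shape)  # (32, 32)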

Applications

Quantum holography using undetected light has potential in a wide variety of scientific and technological fields. Because the technique allows holograms to be created without detecting the photons that illuminate the object, images can be formed at wavelengths that would otherwise be difficult to measure. This has led to proposed use in biomedical imaging: an object can be probed with mid-infrared light, which is useful for identifying biological tissue or chemical composition, while the partner visible photons are detected, since they are easier to pick up with standard silicon-based image sensors. This approach is also viable beyond biomedical imaging, with proposed uses in materials analysis and environmental sensing, as it allows a safer and more precise way to image samples that may be easily damaged through direct exposure to light.

Beyond the imaging and information-storage applications of electronic quantum holography, holographic techniques have also been proposed for high-security applications. One approach is to create "quantum holograms" using entangled photons on metasurfaces, enabling holographic letters whose appearance depends on polarization states and which provide anti-counterfeiting and secure-communication functionality.

In addition to these applications, electronic holographic techniques have demonstrated capabilities in material analysis at the atomic level. High-resolution electron holography enables the identification of individual atom columns in complex structures, such as a "dumbbell" structure. For example, gallium and arsenic columns in GaAs can be differentiated using phase shifts in the reconstructed electron wave, even though their atomic numbers are similar. Holography has also been applied to ferroelectric crystals, revealing local charge distributions and atomic dipoles that may be otherwise challenging to detect. By combining precise phase measurements and high spatial resolution, researchers are able to study interfaces, nanodomains, and subtle atomic-scale distortions, providing detailed information on the structure and electronic properties of materials, and extending the use of holographic imaging beyond typical microscopy.

Low-energy electron holography reconstructs image of DNA

Within microscopy, new methods for imaging nanoscale structures have been developed through the use of precise phase patterns within nonlinear crystals to shape the spatial properties of photon pairs. These techniques could allow for medical imaging at a single-cell scale. To achieve this, the crystals encode spatial information carried by extremely weak optical signals into the quantum correlations of the photon pairs. Because the hologram is imprinted during the nonlinear conversion process, the resulting light fields retain structural and phase details that typical microscopy may not. When combined with modulating optics and quantum state tomography, cell features can be reconstructed with high fidelity and little photodamage, providing an option for safely studying sensitive biological samples.
