
Thursday, April 27, 2017

Wave–particle duality

From Wikipedia, the free encyclopedia

Wave–particle duality is the concept that every elementary particle or quantic entity may be partly described in terms not only of particles, but also of waves. It expresses the inability of the classical concepts "particle" or "wave" to fully describe the behavior of quantum-scale objects. As Albert Einstein wrote: "It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explains the phenomena of light, but together they do."[1]

Through the work of Max Planck, Einstein, Louis de Broglie, Arthur Compton, Niels Bohr and many others, current scientific theory holds that all particles also have a wave nature (and vice versa).[2] This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and even molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected.[3]

Although the use of the wave-particle duality has worked well in physics, the meaning or interpretation has not been satisfactorily resolved; see Interpretations of quantum mechanics.
Niels Bohr regarded the "duality paradox" as a fundamental or metaphysical fact of nature. A given kind of quantum object will exhibit sometimes wave, sometimes particle, character, in respectively different physical settings. He saw such duality as one aspect of the concept of complementarity.[4] Bohr regarded renunciation of the cause-effect relation, or complementarity, of the space-time picture, as essential to the quantum mechanical account.[5]

Werner Heisenberg considered the question further. He saw the duality as present for all quantic entities, but not quite in the usual quantum mechanical account considered by Bohr. He saw it in what is called second quantization, which generates an entirely new concept of fields which exist in ordinary space-time, causality still being visualizable. Classical field values (e.g. the electric and magnetic field strengths of Maxwell) are replaced by an entirely new kind of field value, as considered in quantum field theory. Turning the reasoning around, ordinary quantum mechanics can be deduced as a specialized consequence of quantum field theory.[6][7]

Brief history of wave and particle viewpoints

Democritus—the original atomist—argued that all things in the universe, including light, are composed of indivisible sub-components (light being some form of solar atom).[8] At the beginning of the 11th century, the Arabic scientist Alhazen wrote the first comprehensive treatise on optics, describing refraction, reflection, and the operation of a pinhole lens via rays of light traveling from the point of emission to the eye. He asserted that these rays were composed of particles of light. In 1630, René Descartes popularized and accredited the opposing wave description in his treatise on light, showing that the behavior of light could be re-created by modeling wave-like disturbances in a universal medium ("plenum"). Beginning in 1670 and progressing over three decades, Isaac Newton developed and championed his corpuscular hypothesis, arguing that the perfectly straight lines of reflection demonstrated light's particle nature; only particles could travel in such straight lines. He explained refraction by positing that particles of light accelerated laterally upon entering a denser medium. Around the same time, Newton's contemporaries Robert Hooke and Christiaan Huygens—and later Augustin-Jean Fresnel—mathematically refined the wave viewpoint, showing that if light traveled at different speeds in different media (such as water and air), refraction could be easily explained as the medium-dependent propagation of light waves. The resulting Huygens–Fresnel principle was extremely successful at reproducing light's behavior and was subsequently supported by Thomas Young's 1803 discovery of double-slit interference.[9][10] The wave view did not immediately displace the ray and particle view, but began to dominate scientific thinking about light in the mid-19th century, since it could explain polarization phenomena that the alternatives could not.[11]
Thomas Young's sketch of two-slit diffraction of waves, 1803

James Clerk Maxwell discovered that his previously formulated equations of electromagnetism, with a slight modification, describe self-propagating waves of oscillating electric and magnetic fields. When the propagation speed of these electromagnetic waves was calculated, the speed of light fell out. It quickly became apparent that visible light, ultraviolet light, and infrared light (phenomena thought previously to be unrelated) were all electromagnetic waves of differing frequency. The wave theory had prevailed—or at least it seemed to.
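Maxwell's result can be reproduced in a few lines. A minimal sketch, assuming only the SI values of the vacuum permittivity and permeability: the wave speed 1/√(μ₀ε₀) that falls out of his equations is the measured speed of light.

```python
import math

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
mu0 = 1.25663706212e-6   # vacuum permeability, H/m

# Maxwell's wave equation predicts electromagnetic waves travelling at
# c = 1 / sqrt(mu0 * eps0)
c = 1.0 / math.sqrt(mu0 * eps0)
print(f"{c:.6e} m/s")  # approximately 2.998e8 m/s, the speed of light
```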

While the 19th century had seen the success of the wave theory at describing light, it had also witnessed the rise of the atomic theory at describing matter. Antoine Lavoisier deduced the law of conservation of mass and categorized many new chemical elements and compounds; and Joseph Louis Proust advanced chemistry towards the atom by showing that elements combined in definite proportions. This led John Dalton to propose that elements were composed of indivisible sub-components; Amedeo Avogadro discovered diatomic gases and completed the basic atomic theory, allowing the correct molecular formulae of most known compounds—as well as the correct weights of atoms—to be deduced and categorized in a consistent manner. Dmitri Mendeleev saw an order in recurring chemical properties, and created a table presenting the elements in unprecedented order and symmetry.
Animation showing wave–particle duality in a double-slit experiment, and the effect of an observer.
 
Particle impacts make visible the interference pattern of waves.
A quantum particle is represented by a wave packet.
Interference of a quantum particle with itself.
Click images for animations.

Turn of the 20th century and the paradigm shift

Particles of electricity

At the close of the 19th century, the reductionism of atomic theory began to advance into the atom itself; determining, through physics, the nature of the atom and the operation of chemical reactions. Electricity, first thought to be a fluid, was now understood to consist of particles called electrons. This was first demonstrated by J. J. Thomson in 1897 when, using a cathode ray tube, he found that an electrical charge would travel across a vacuum (which would possess infinite resistance in classical theory). Since the vacuum offered no medium for an electric fluid to travel through, this discovery could only be explained via a particle carrying a negative charge and moving through the vacuum. This electron flew in the face of classical electrodynamics, which had successfully treated electricity as a fluid for many years (leading to the invention of batteries, electric motors, dynamos, and arc lamps). More importantly, the intimate relation between electric charge and electromagnetism had been well documented following the discoveries of Michael Faraday and James Clerk Maxwell. Since electromagnetism was known to be a wave generated by a changing electric or magnetic field (a continuous, wave-like entity itself) an atomic/particle description of electricity and charge was a non sequitur. Furthermore, classical electrodynamics was not the only classical theory rendered incomplete.

Radiation quantization

In 1901, Max Planck published an analysis that succeeded in reproducing the observed spectrum of light emitted by a glowing object. To accomplish this, Planck had to make an ad hoc mathematical assumption of quantized energy of the oscillators (atoms of the black body) that emit radiation. Einstein later proposed that electromagnetic radiation itself is quantized, not the energy of radiating atoms.

Black-body radiation, the emission of electromagnetic energy due to an object's heat, could not be explained from classical arguments alone. The equipartition theorem of classical mechanics, the basis of all classical thermodynamic theories, stated that an object's energy is partitioned equally among the object's vibrational modes. But applying the same reasoning to the electromagnetic emission of such a thermal object was not so successful. That thermal objects emit light had been long known. Since light was known to be waves of electromagnetism, physicists hoped to describe this emission via classical laws. This became known as the black body problem. Since the equipartition theorem worked so well in describing the vibrational modes of the thermal object itself, it was natural to assume that it would perform equally well in describing the radiative emission of such objects. But a problem quickly arose: if each mode received an equal partition of energy, the short wavelength modes would consume all the energy. This became clear when plotting the Rayleigh–Jeans law which, while correctly predicting the intensity of long wavelength emissions, predicted infinite total energy as the intensity diverges to infinity for short wavelengths. This became known as the ultraviolet catastrophe.
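The divergence can be made concrete numerically. A sketch (SI constants; the temperature is an arbitrary illustrative choice) comparing the Rayleigh–Jeans law with Planck's law: the two agree at low frequency, but the classical law grows without bound while Planck's expression is exponentially cut off.

```python
import math

h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K

def planck(nu, T):
    """Planck spectral radiance B_nu(T), in W * sr^-1 * m^-2 * Hz^-1."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

def rayleigh_jeans(nu, T):
    """Classical equipartition prediction; grows as nu^2 without bound."""
    return 2 * nu**2 * kB * T / c**2

T = 5000.0  # kelvin, an illustrative temperature
# Low frequency: the two laws agree to within half a percent.
ratio_low = rayleigh_jeans(1e12, T) / planck(1e12, T)
# High frequency: the classical law overshoots by many orders of magnitude;
# integrated over all modes, this is the ultraviolet catastrophe.
ratio_high = rayleigh_jeans(1e16, T) / planck(1e16, T)
```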

In 1900, Max Planck hypothesized that the frequency of light emitted by the black body depended on the frequency of the oscillator that emitted it, and the energy of these oscillators increased linearly with frequency (according to his constant h, where E = hν). This was not an unsound proposal considering that macroscopic oscillators operate similarly: when studying five simple harmonic oscillators of equal amplitude but different frequency, the oscillator with the highest frequency possesses the highest energy (though this relationship is not linear like Planck's). By demanding that high-frequency light must be emitted by an oscillator of equal frequency, and further requiring that this oscillator occupy higher energy than one of a lesser frequency, Planck avoided any catastrophe; giving an equal partition to high-frequency oscillators produced successively fewer oscillators and less emitted light. And as in the Maxwell–Boltzmann distribution, the low-frequency, low-energy oscillators were suppressed by the onslaught of thermal jiggling from higher energy oscillators, which necessarily increased their energy and frequency.

The most revolutionary aspect of Planck's treatment of the black body is that it inherently relies on an integer number of oscillators in thermal equilibrium with the electromagnetic field. These oscillators give their entire energy to the electromagnetic field, creating a quantum of light, as often as they are excited by the electromagnetic field, absorbing a quantum of light and beginning to oscillate at the corresponding frequency. Planck had intentionally created an atomic theory of the black body, but had unintentionally generated an atomic theory of light, where the black body never generates quanta of light at a given frequency with an energy less than E = hν. However, once realizing that he had quantized the electromagnetic field, he denounced particles of light as a limitation of his approximation, not a property of reality.

Photoelectric effect illuminated

While Planck had solved the ultraviolet catastrophe by using atoms and a quantized electromagnetic field, most contemporary physicists agreed that Planck's "light quanta" represented only flaws in his model. A more-complete derivation of black body radiation would yield a fully continuous and 'wave-like' electromagnetic field with no quantization. However, in 1905 Albert Einstein took Planck's black body model to produce his solution to another outstanding problem of the day: the photoelectric effect, wherein electrons are emitted from atoms when they absorb energy from light. Since their discovery eight years previously, electrons had been studied in physics laboratories worldwide.

In 1902, Philipp Lenard discovered that the energy of these ejected electrons did not depend on the intensity of the incoming light, but instead on its frequency. So if one shines a little low-frequency light upon a metal, a few low-energy electrons are ejected. If one now shines a very intense beam of low-frequency light upon the same metal, a whole slew of electrons are ejected; however, they possess the same low energy; there are merely more of them. The more light there is, the more electrons are ejected. In order to get high-energy electrons, one must illuminate the metal with high-frequency light. Like blackbody radiation, this was at odds with a theory invoking continuous transfer of energy between radiation and matter. However, it can still be explained using a fully classical description of light, as long as matter is quantum mechanical in nature.[12]

If one used Planck's energy quanta, and demanded that electromagnetic radiation at a given frequency could only transfer energy to matter in integer multiples of an energy quantum E = hν, then the photoelectric effect could be explained very simply. Low-frequency light only ejects low-energy electrons because each electron is excited by the absorption of a single photon. Increasing the intensity of the low-frequency light (increasing the number of photons) only increases the number of excited electrons, not their energy, because the energy of each photon remains low. Only by increasing the frequency of the light, and thus increasing the energy of the photons, can one eject electrons with higher energy. Thus, using Planck's constant h to determine the energy of the photons based upon their frequency, the energy of ejected electrons should also increase linearly with frequency; the gradient of the line being Planck's constant. These results were not confirmed until 1915, when Robert Andrews Millikan, who had previously determined the charge of the electron, produced experimental results in perfect accord with Einstein's predictions. While the energy of ejected electrons reflected Planck's constant, the existence of photons was not explicitly proven until the discovery of the photon antibunching effect, a modern version of which can be performed in undergraduate-level labs.[13] This phenomenon could only be explained via photons, and not through any semi-classical theory (which could alternatively explain the photoelectric effect). When Einstein received his Nobel Prize in 1921, it was not for his more difficult and mathematically laborious special and general relativity, but for the simple, yet totally revolutionary, suggestion of quantized light. Einstein's "light quanta" would not be called photons until 1925, but even in 1905 they represented the quintessential example of wave-particle duality.
Electromagnetic radiation propagates following linear wave equations, but can only be emitted or absorbed as discrete elements, thus acting as a wave and a particle simultaneously.

Einstein's explanation of the photoelectric effect

The photoelectric effect. Incoming photons on the left strike a metal plate (bottom), and eject electrons, depicted as flying off to the right.

In 1905, Albert Einstein provided an explanation of the photoelectric effect, a hitherto troubling experiment that the wave theory of light seemed incapable of explaining. He did so by postulating the existence of photons, quanta of light energy with particulate qualities.

In the photoelectric effect, it was observed that shining a light on certain metals would lead to an electric current in a circuit. Presumably, the light was knocking electrons out of the metal, causing current to flow. However, using the case of potassium as an example, it was also observed that while a dim blue light was enough to cause a current, even the strongest, brightest red light available with the technology of the time caused no current at all. According to the classical theory of light and matter, the strength or amplitude of a light wave was in proportion to its brightness: a bright light should have been easily strong enough to create a large current. Yet, oddly, this was not so.

Einstein explained this conundrum by postulating that the electrons can receive energy from the electromagnetic field only in discrete portions (quanta that were called photons): an amount of energy E that was related to the frequency f of the light by
E = hf
where h is Planck's constant (6.626 × 10⁻³⁴ J·s). Only photons of a high enough frequency (above a certain threshold value) could knock an electron free. For example, photons of blue light had sufficient energy to free an electron from the metal, but photons of red light did not. One photon of light above the threshold frequency could release only one electron; the higher the frequency of a photon, the higher the kinetic energy of the emitted electron, but no amount of light (using technology available at the time) below the threshold frequency could release an electron. To "violate" this law would require extremely high-intensity lasers which had not yet been invented. Intensity-dependent phenomena have now been studied in detail with such lasers.[14]
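A minimal sketch of Einstein's law, KE = hf − φ. The work function value for potassium used here (~2.29 eV) is an illustrative number, and the two frequencies simply stand in for "blue" and "red" light.

```python
h = 6.62607015e-34    # Planck constant, J*s
eV = 1.602176634e-19  # joules per electronvolt

phi = 2.29 * eV  # illustrative work function for potassium

def ejected_electron_energy(frequency_hz):
    """Einstein's photoelectric law: KE = h*f - phi; None below threshold."""
    ke = h * frequency_hz - phi
    return ke if ke > 0 else None

blue = 6.7e14  # Hz: photon energy ~2.8 eV, above the threshold
red = 4.3e14   # Hz: photon energy ~1.8 eV, below it
# Blue light ejects electrons; red light ejects none, however intense,
# because intensity changes the number of photons, not their energy.
```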

Einstein was awarded the Nobel Prize in Physics in 1921 for his discovery of the law of the photoelectric effect.

De Broglie's wavelength

Propagation of de Broglie waves in 1d—real part of the complex amplitude is blue, imaginary part is green. The probability (shown as the colour opacity) of finding the particle at a given point x is spread out like a waveform; there is no definite position of the particle. As the amplitude increases above zero the curvature decreases, so the amplitude decreases again, and vice versa—the result is an alternating amplitude: a wave. Top: Plane wave. Bottom: Wave packet.

In 1924, Louis-Victor de Broglie formulated the de Broglie hypothesis, claiming that all matter,[15][16] not just light, has a wave-like nature; he related wavelength (denoted as λ), and momentum (denoted as p):
λ = h/p
This is a generalization of Einstein's equation above, since the momentum of a photon is given by p = E/c and the wavelength (in a vacuum) by λ = c/f, where c is the speed of light in vacuum.
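As a numeric illustration (electron constants in SI units; the 100 eV kinetic energy is of the order used in the early electron-diffraction experiments), the de Broglie wavelength of a slow electron comes out comparable to the atomic spacing in a crystal, which is why crystals act as diffraction gratings for electrons:

```python
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # joules per electronvolt

def de_broglie_wavelength(mass_kg, speed_m_s):
    """lambda = h / p for a non-relativistic particle."""
    return h / (mass_kg * speed_m_s)

# A 100 eV electron is non-relativistic, so KE = m * v**2 / 2
v = (2 * 100 * eV / m_e) ** 0.5
lam = de_broglie_wavelength(m_e, v)
# lam is about 1.2e-10 m, i.e. roughly one angstrom
```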

De Broglie's formula was confirmed three years later for electrons (which differ from photons in having a rest mass) with the observation of electron diffraction in two independent experiments. At the University of Aberdeen, George Paget Thomson passed a beam of electrons through a thin metal film and observed the predicted interference patterns. At Bell Labs, Clinton Joseph Davisson and Lester Halbert Germer guided their beam through a crystalline grid.

De Broglie was awarded the Nobel Prize for Physics in 1929 for his hypothesis. Thomson and Davisson shared the Nobel Prize for Physics in 1937 for their experimental work.

Heisenberg's uncertainty principle

In his work on formulating quantum mechanics, Werner Heisenberg postulated his uncertainty principle, which states:
Δx Δp ≥ ħ/2
where
Δ indicates the standard deviation, a measure of spread or uncertainty;
x and p are a particle's position and linear momentum respectively;
ħ is the reduced Planck constant (Planck's constant divided by 2π).
Heisenberg originally explained this as a consequence of the process of measuring: Measuring position accurately would disturb momentum and vice versa, offering an example (the "gamma-ray microscope") that depended crucially on the de Broglie hypothesis. The thought is now, however, that this only partly explains the phenomenon, but that the uncertainty also exists in the particle itself, even before the measurement is made.

In fact, the modern explanation of the uncertainty principle, extending the Copenhagen interpretation first put forward by Bohr and Heisenberg, depends even more centrally on the wave nature of a particle: Just as it is nonsensical to discuss the precise location of a wave on a string, particles do not have perfectly precise positions; likewise, just as it is nonsensical to discuss the wavelength of a "pulse" wave traveling down a string, particles do not have perfectly precise momenta (which corresponds to the inverse of wavelength). Moreover, when position is relatively well defined, the wave is pulse-like and has a very ill-defined wavelength (and thus momentum). And conversely, when momentum (and thus wavelength) is relatively well defined, the wave looks long and sinusoidal, and therefore it has a very ill-defined position.
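This trade-off can be made quantitative for the simplest case. For a Gaussian wave packet the position and momentum spreads are exact Fourier partners, and their product sits at Heisenberg's lower bound whatever the packet width. A sketch using the standard analytic Gaussian result (not a general proof):

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s

# For a Gaussian packet with probability density
# |psi(x)|^2 proportional to exp(-x**2 / (2 * sigma**2)),
# the spreads are dx = sigma and dp = hbar / (2 * sigma):
# squeezing the packet in x necessarily widens it in p.
products = []
for sigma in (1e-10, 1e-9, 1e-8):  # packet widths in metres
    dx = sigma
    dp = hbar / (2 * sigma)
    products.append(dx * dp)
# each product equals hbar / 2, the minimum Heisenberg allows
```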

de Broglie–Bohm theory

Couder experiments,[17] "materializing" the pilot wave model.

De Broglie himself had proposed a pilot wave construct to explain the observed wave-particle duality. In this view, each particle has a well-defined position and momentum, but is guided by a wave function derived from Schrödinger's equation. The pilot wave theory was initially rejected because it generated non-local effects when applied to systems involving more than one particle. Non-locality, however, soon became established as an integral feature of quantum theory (see EPR paradox), and David Bohm extended de Broglie's model to explicitly include it.

In the resulting representation, also called the de Broglie–Bohm theory or Bohmian mechanics,[18] the wave–particle duality vanishes: the particle's motion is governed by a guiding equation (or quantum potential), and the wave-like behaviour emerges from the statistics of these guided trajectories. "This idea seems to me so natural and simple, to resolve the wave–particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored", wrote J. S. Bell.[19]

The best illustration of the pilot-wave model was given by Couder's 2010 "walking droplets" experiments,[20] demonstrating the pilot-wave behaviour in a macroscopic mechanical analog.[17]

Wave behavior of large objects

Since the demonstrations of wave-like properties in photons and electrons, similar experiments have been conducted with neutrons and protons. Among the most famous experiments are those of Estermann and Otto Stern in 1929.[21] Authors of similar recent experiments with atoms and molecules, described below, claim that these larger particles also act like waves.

A dramatic series of experiments emphasizing the action of gravity in relation to wave–particle duality was conducted in the 1970s using the neutron interferometer.[22] Neutrons, one of the components of the atomic nucleus, provide much of the mass of a nucleus and thus of ordinary matter. In the neutron interferometer, they act as quantum-mechanical waves directly subject to the force of gravity. While the results were not surprising since gravity was known to act on everything, including light (see tests of general relativity and the Pound–Rebka falling photon experiment), the self-interference of the quantum mechanical wave of a massive fermion in a gravitational field had never been experimentally confirmed before.

In 1999, the diffraction of C60 fullerenes by researchers from the University of Vienna was reported.[23] Fullerenes are comparatively large and massive objects, having an atomic mass of about 720 u. The de Broglie wavelength of the incident beam was about 2.5 pm, whereas the diameter of the molecule is about 1 nm, about 400 times larger. In 2012, these far-field diffraction experiments were extended to phthalocyanine molecules and their heavier derivatives, which are composed of 58 and 114 atoms respectively. In these experiments the build-up of such interference patterns was recorded in real time and with single-molecule sensitivity.[24][25]
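The quoted wavelength is easy to check from the de Broglie relation. A sketch, assuming a beam speed of 220 m/s, which is of the order reported for the Vienna fullerene experiment:

```python
h = 6.62607015e-34     # Planck constant, J*s
u = 1.66053906660e-27  # atomic mass unit, kg

m_c60 = 720 * u  # C60 fullerene mass, ~720 u
v = 220.0        # m/s, assumed beam speed
lam = h / (m_c60 * v)
# lam is about 2.5e-12 m (2.5 pm), some 400 times smaller than
# the ~1 nm diameter of the molecule itself
```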

In 2003, the Vienna group also demonstrated the wave nature of tetraphenylporphyrin[26]—a flat biodye with an extension of about 2 nm and a mass of 614 u. For this demonstration they employed a near-field Talbot Lau interferometer.[27][28] In the same interferometer they also found interference fringes for C60F48, a fluorinated buckyball with a mass of about 1600 u, composed of 108 atoms.[26] Large molecules are already so complex that they give experimental access to some aspects of the quantum-classical interface, i.e., to certain decoherence mechanisms.[29][30] In 2011, the interference of molecules as heavy as 6910 u was demonstrated in a Kapitza–Dirac–Talbot–Lau interferometer.[31] In 2013, the interference of molecules beyond 10,000 u was demonstrated.[32]

Whether objects heavier than the Planck mass (about the weight of a large bacterium) have a de Broglie wavelength is theoretically unclear and experimentally unreachable; above the Planck mass a particle's Compton wavelength would be smaller than the Planck length and its own Schwarzschild radius, a scale at which current theories of physics may break down or need to be replaced by more general ones.[33]
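The Planck-mass crossover can be sketched directly from the definitions: the mass at which the reduced Compton wavelength ħ/(mc) meets the Schwarzschild-radius scale 2Gm/c² (up to factors of order one) is √(ħc/G).

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2

m_planck = math.sqrt(hbar * c / G)       # ~2.18e-8 kg
compton = hbar / (m_planck * c)          # reduced Compton wavelength
schwarzschild = 2 * G * m_planck / c**2  # Schwarzschild radius
# Both lengths land at the Planck scale (~1.6e-35 m), the regime where
# current theories of physics are expected to break down.
```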

More recently, Couder, Fort, et al. showed[34] that macroscopic oil droplets bouncing on a vibrating surface can be used as a model of wave–particle duality: the localized droplet creates periodic waves around itself, and its interaction with them leads to quantum-like phenomena, including interference in a double-slit experiment,[35] unpredictable tunneling[36] (depending in a complicated way on the practically hidden state of the field), orbit quantization[37] (the particle has to 'find a resonance' with the field perturbations it creates: after one orbit, its internal phase must return to its initial state) and the Zeeman effect.[38]

Treatment in modern quantum mechanics

Wave–particle duality is deeply embedded into the foundations of quantum mechanics. In the formalism of the theory, all the information about a particle is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. This function evolves according to a differential equation (generically called the Schrödinger equation). For particles with mass this equation has solutions that follow the form of the wave equation. Propagation of such waves leads to wave-like phenomena such as interference and diffraction. Particles without mass, such as photons, are not described by the Schrödinger equation; their wave-like behaviour instead follows from other wave equations, such as Maxwell's equations for light.

The particle-like behavior is most evident in phenomena associated with measurement in quantum mechanics. Upon measuring the location of the particle, the particle will be forced into a more localized state as given by the uncertainty principle. When viewed through this formalism, the measurement of the wave function will randomly "collapse", or rather "decohere", to a sharply peaked function at some location. For particles with mass the likelihood of detecting the particle at any particular location is equal to the squared amplitude of the wave function there. The measurement will return a well-defined position (subject to uncertainty), a property traditionally associated with particles. It is important to note that a measurement is only a particular type of interaction where some data is recorded and the measured quantity is forced into a particular eigenstate. The act of measurement is therefore not fundamentally different from any other interaction.

Following the development of quantum field theory, the ambiguity disappeared. The field permits solutions that follow the wave equation, which are referred to as wave functions. The term particle is used to label the irreducible representations of the Lorentz group that are permitted by the field. An interaction as in a Feynman diagram is accepted as a calculationally convenient approximation where the outgoing legs are known to be simplifications of the propagation and the internal lines represent terms at some order in an expansion of the field interaction. Since the field is non-local and quantized, the phenomena which previously were thought of as paradoxes are explained. Within the domain where wave–particle duality applies, quantum field theory gives the same results.

Visualization

There are two ways to visualize the wave-particle behaviour: by the "standard model", described below; and by the de Broglie–Bohm model, where no duality is perceived.

Below is an illustration of wave–particle duality as it relates to de Broglie's hypothesis and Heisenberg's uncertainty principle (above), in terms of the position and momentum space wavefunctions for one spinless particle with mass in one dimension. These wavefunctions are Fourier transforms of each other.

The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized so the possible momentum components the particle could have are more widespread.

Conversely the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread.
Position x and momentum p wavefunctions corresponding to quantum particles. The colour opacity (%) of the particles corresponds to the probability density of finding the particle with position x or momentum component p.
Top: If wavelength λ is unknown, so are momentum p, wave-vector k and energy E (de Broglie relations). As the particle is more localized in position space, Δx is smaller than for Δpx.
Bottom: If λ is known, so are p, k, and E. As the particle is more localized in momentum space, Δp is smaller than for Δx.

Alternative views

Wave–particle duality is an ongoing conundrum in modern physics. Most physicists accept wave-particle duality as the best explanation for a broad range of observed phenomena; however, it is not without controversy. Alternative views are also presented here. These views are not generally accepted by mainstream physics, but serve as a basis for valuable discussion within the community.

Both-particle-and-wave view

The pilot wave model, originally developed by Louis de Broglie and further developed by David Bohm into the hidden variable theory, proposes that there is no duality, but rather a system exhibits both particle properties and wave properties simultaneously, and particles are guided, in a deterministic fashion, by the pilot wave (or its "quantum potential"), which directs them to areas of constructive interference in preference to areas of destructive interference. This idea is held by a significant minority within the physics community.[39]

At least one physicist considers the "wave-duality" as not being an incomprehensible mystery. L.E. Ballentine, Quantum Mechanics, A Modern Development, p. 4, explains:
When first discovered, particle diffraction was a source of great puzzlement. Are "particles" really "waves?" In the early experiments, the diffraction patterns were detected holistically by means of a photographic plate, which could not detect individual particles. As a result, the notion grew that particle and wave properties were mutually incompatible, or complementary, in the sense that different measurement apparatuses would be required to observe them. That idea, however, was only an unfortunate generalization from a technological limitation. Today it is possible to detect the arrival of individual electrons, and to see the diffraction pattern emerge as a statistical pattern made up of many small spots (Tonomura et al., 1989). Evidently, quantum particles are indeed particles, but whose behaviour is very different from what classical physics would have us expect.
The Afshar experiment[40] (2007) may suggest that it is possible to simultaneously observe both wave and particle properties of photons. This claim is, however, disputed by other scientists.[41][42][43][44]

Wave-only view

At least one scientist proposes that the duality can be replaced by a "wave-only" view. In his book Collective Electrodynamics: Quantum Foundations of Electromagnetism (2000), Carver Mead purports to analyze the behavior of electrons and photons purely in terms of electron wave functions, and attributes the apparent particle-like behavior to quantization effects and eigenstates. According to reviewer David Haddon:[45]
Mead has cut the Gordian knot of quantum complementarity. He claims that atoms, with their neutrons, protons, and electrons, are not particles at all but pure waves of matter. Mead cites as the gross evidence of the exclusively wave nature of both light and matter the discovery between 1933 and 1996 of ten examples of pure wave phenomena, including the ubiquitous laser of CD players, the self-propagating electrical currents of superconductors, and the Bose–Einstein condensate of atoms.
Albert Einstein, who, in his search for a Unified Field Theory, did not accept wave-particle duality, wrote:[46]
This double nature of radiation (and of material corpuscles)...has been interpreted by quantum-mechanics in an ingenious and amazingly successful fashion. This interpretation...appears to me as only a temporary way out...
The many-worlds interpretation (MWI) is sometimes presented as a waves-only theory, including by its originator, Hugh Everett, who referred to the MWI as "the wave interpretation".[47]

The Three Wave Hypothesis of R. Horodecki relates the particle to the wave.[48][49] The hypothesis implies that a massive particle is an intrinsically spatially, as well as temporally, extended wave phenomenon governed by a nonlinear law.

Particle-only view

Still in the days of the old quantum theory, a pre-quantum-mechanical version of wave–particle duality was pioneered by William Duane,[50] and developed by others including Alfred Landé.[51] Duane explained diffraction of x-rays by a crystal in terms solely of their particle aspect. The deflection of the trajectory of each diffracted photon was explained as due to quantized momentum transfer from the spatially regular structure of the diffracting crystal.[52]

Neither-wave-nor-particle view

It has been argued that there are never exact particles or waves, but only some compromise or intermediate between them. For this reason, in 1928 Arthur Eddington[53] coined the name "wavicle" to describe the objects, although it is not regularly used today. One consideration is that zero-dimensional mathematical points cannot be observed. Another is that the formal representation of such points, the Dirac delta function, is unphysical, because it cannot be normalized. Parallel arguments apply to pure wave states. Roger Penrose states:[54]
"Such 'position states' are idealized wavefunctions in the opposite sense from the momentum states. Whereas the momentum states are infinitely spread out, the position states are infinitely concentrated. Neither is normalizable [...]."

Relational approach to wave–particle duality

Relational quantum mechanics has been developed as an approach that regards the detection event as establishing a relationship between the quantized field and the detector. The inherent ambiguity in applying Heisenberg's uncertainty principle, and thus wave–particle duality, is thereby avoided.[55]

Applications

Although it is difficult to draw a line separating wave–particle duality from the rest of quantum mechanics, it is nevertheless possible to list some applications of this basic idea.
  • Wave–particle duality is exploited in electron microscopy, where the small wavelengths associated with the electron can be used to view objects much smaller than those visible using visible light.
  • Similarly, neutron diffraction uses neutrons with a wavelength of about 0.1 nm, the typical spacing of atoms in a solid, to determine the structure of solids.
  • Photographs are now able to show this dual nature, which may lead to new ways of examining and recording this behaviour.[56]
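The wavelengths these applications rely on follow from the de Broglie relation λ = h/p. A minimal numerical sketch (the constants are standard values; the 2200 m/s thermal-neutron speed is a typical textbook figure, not taken from this article):

```python
# De Broglie wavelength lambda = h / (m * v), linking a particle's
# momentum to its wave behaviour.
H = 6.626e-34          # Planck's constant, J*s
M_NEUTRON = 1.675e-27  # neutron mass, kg

def de_broglie_wavelength(mass_kg: float, speed_m_s: float) -> float:
    """Return the de Broglie wavelength in metres."""
    return H / (mass_kg * speed_m_s)

# A thermal neutron (~2200 m/s) has a wavelength near 0.18 nm,
# comparable to atomic spacings, which is why neutron diffraction
# can resolve crystal structure.
wavelength = de_broglie_wavelength(M_NEUTRON, 2200.0)
print(f"{wavelength * 1e9:.2f} nm")  # prints "0.18 nm"
```

The same formula explains why macroscopic objects show no detectable wave behaviour: their large mass makes λ immeasurably small.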

Monday, April 24, 2017

Ray Kurzweil

From Wikipedia, the free encyclopedia
Ray Kurzweil
(Photo: Kurzweil on or prior to July 5, 2005)
Born: Raymond Kurzweil, February 12, 1948 (age 69), Queens, New York City, U.S.
Nationality: American
Alma mater: Massachusetts Institute of Technology (B.S.)
Occupation: Author, entrepreneur, futurist and inventor
Employer: Google Inc.
Spouse(s): Sonya Rosenwald Fenster (m. 1975)[1]
Awards: Grace Murray Hopper Award (1978); National Medal of Technology (1999)
Website: Official website

Raymond "Ray" Kurzweil (/ˈkɜːrzwaɪl/ KURZ-wyle; born February 12, 1948) is an American author, computer scientist, inventor and futurist. Aside from futurism, he is involved in fields such as optical character recognition (OCR), text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments. He has written books on health, artificial intelligence (AI), transhumanism, the technological singularity, and futurism. Kurzweil is a public advocate for the futurist and transhumanist movements, and gives public talks to share his optimistic outlook on life extension technologies and the future of nanotechnology, robotics, and biotechnology.

Kurzweil was the principal inventor of the first charge-coupled device flatbed scanner,[2] the first omni-font optical character recognition,[2] the first print-to-speech reading machine for the blind,[3] the first commercial text-to-speech synthesizer,[4] the Kurzweil K250 music synthesizer capable of simulating the sound of the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition.[5]

Kurzweil received the 1999 National Medal of Technology and Innovation, the United States' highest honor in technology, from President Clinton in a White House ceremony. He was the recipient of the $500,000 Lemelson-MIT Prize for 2001,[6] the world's largest prize for innovation. In 2002 he was inducted into the National Inventors Hall of Fame, established by the U.S. Patent Office. He has received twenty-one honorary doctorates, and honors from three U.S. presidents. Kurzweil has been described as a "restless genius"[7] by The Wall Street Journal and "the ultimate thinking machine"[8] by Forbes. PBS included Kurzweil as one of 16 "revolutionaries who made America"[9] along with other inventors of the past two centuries. Inc. magazine ranked him #8 among the "most fascinating" entrepreneurs in the United States and called him "Edison's rightful heir".[10]

Kurzweil has written seven books, five of which have been national bestsellers. The Age of Spiritual Machines has been translated into 9 languages and was the #1 best-selling book on Amazon in science. Kurzweil's book The Singularity Is Near was a New York Times bestseller, and has been the #1 book on Amazon in both science and philosophy. Kurzweil speaks widely to audiences both public and private and regularly delivers keynote speeches at industry conferences like DEMO, SXSW and TED. He maintains the news website KurzweilAI.net, which has over three million readers annually.[5]

Life, inventions, and business career

Early life

Ray Kurzweil grew up in the New York City borough of Queens. He was born to secular Jewish parents who had emigrated from Austria just before the onset of World War II. He was exposed via Unitarian Universalism to a diversity of religious faiths during his upbringing.[citation needed] His Unitarian church had the philosophy of many paths to the truth: the religious education consisted of spending six months on a single religion before moving on to the next.[citation needed] His father was a musician, a noted conductor, and a music educator. His mother was a visual artist.

Kurzweil decided he wanted to be an inventor at the age of five.[11] As a young boy, Kurzweil had an inventory of parts from various construction toys he’d been given and old electronic gadgets he’d collected from neighbors. In his youth, Kurzweil was an avid reader of science fiction literature. At the age of eight, nine, and ten, he read the entire Tom Swift Jr. series. At the age of seven or eight, he built a robotic puppet theater and robotic game. He was involved with computers by the age of twelve (in 1960), when only a dozen computers existed in all of New York City, and built computing devices and statistical programs for the predecessor of Head Start.[12] At the age of fourteen, Kurzweil wrote a paper detailing his theory of the neocortex.[13] His parents were involved with the arts, and he is quoted in the documentary Transcendent Man as saying that the household always produced discussions about the future and technology.

Kurzweil attended Martin Van Buren High School. During class, he often held his textbooks upright so as to appear to be following along, while actually working on his own projects hidden behind the book. His uncle, an engineer at Bell Labs, taught young Kurzweil the basics of computer science.[14] In 1963, at age fifteen, he wrote his first computer program.[15] He created pattern-recognition software that analyzed the works of classical composers and then synthesized its own songs in similar styles. In 1965, he was invited to appear on the CBS television program I've Got a Secret, where he performed a piano piece composed by a computer he had built.[16] Later that year, he won first prize in the International Science Fair for the invention.[17] His Westinghouse Talent Search submission of this first computer program, alongside several other projects, made him one of its national winners, and he was personally congratulated by President Lyndon B. Johnson during a White House ceremony. These activities collectively impressed upon Kurzweil the belief that nearly any problem could be overcome.[18]

Mid-life

While in high school, Kurzweil had corresponded with Marvin Minsky and was invited to visit him at MIT, which he did. Kurzweil also visited Frank Rosenblatt at Cornell.[19]

He went to MIT to study with Marvin Minsky, and obtained a B.S. in computer science and literature there in 1970. In his first year and a half at MIT, he took all of the computer programming courses offered (eight or nine).

In 1968, during his sophomore year at MIT, Kurzweil started a company that used a computer program to match high school students with colleges. The program, called the Select College Consulting Program, was designed by him and compared thousands of different criteria about each college with questionnaire answers submitted by each student applicant. Around this time, he sold the company to Harcourt, Brace & World for $100,000 (roughly $670,000 in 2013 dollars) plus royalties.[20]

In 1974, Kurzweil founded Kurzweil Computer Products, Inc. and led development of the first omni-font optical character recognition system, a computer program capable of recognizing text written in any normal font. Before that time, scanners had only been able to read text written in a few fonts. He decided that the best application of this technology would be to create a reading machine, which would allow blind people to understand written text by having a computer read it to them aloud. However, this device required the invention of two enabling technologies—the CCD flatbed scanner and the text-to-speech synthesizer. Development of these technologies was completed at other institutions such as Bell Labs, and on January 13, 1976, the finished product was unveiled during a news conference headed by him and the leaders of the National Federation of the Blind. Called the Kurzweil Reading Machine, the device covered an entire tabletop.

Kurzweil's next major business venture began in 1978, when Kurzweil Computer Products began selling a commercial version of the optical character recognition computer program. LexisNexis was one of the first customers, and bought the program to upload paper legal and news documents onto its nascent online databases.

Kurzweil sold Kurzweil Computer Products to Lernout & Hauspie. Following the latter's legal and bankruptcy problems, the system became a subsidiary of Xerox, later known as ScanSoft and now as Nuance Communications; Kurzweil remained a consultant until 1995.

Kurzweil's next business venture was in the realm of electronic music technology. After a 1982 meeting with Stevie Wonder, in which the latter lamented the divide in capabilities and qualities between electronic synthesizers and traditional musical instruments, Kurzweil was inspired to create a new generation of music synthesizers capable of accurately duplicating the sounds of real instruments. Kurzweil Music Systems was founded in the same year, and in 1984, the Kurzweil K250 was unveiled. The machine was capable of imitating a number of instruments, and in tests musicians were unable to distinguish the Kurzweil K250 in piano mode from a normal grand piano.[21] The recording and mixing abilities of the machine, coupled with its abilities to imitate different instruments, made it possible for a single user to compose and play an entire orchestral piece.

Kurzweil Music Systems was sold to South Korean musical instrument manufacturer Young Chang in 1990. As with Xerox, Kurzweil remained as a consultant for several years. Hyundai acquired Young Chang in 2006 and in January 2007 appointed Raymond Kurzweil as Chief Strategy Officer of Kurzweil Music Systems.[22]

Later life

Concurrent with Kurzweil Music Systems, Kurzweil created the company Kurzweil Applied Intelligence (KAI) to develop computer speech recognition systems for commercial use. The first product, which debuted in 1987, was an early speech recognition program.

Kurzweil started Kurzweil Educational Systems in 1996 to develop new pattern-recognition-based computer technologies to help people with disabilities such as blindness, dyslexia and attention-deficit hyperactivity disorder (ADHD) in school. Products include the Kurzweil 1000 text-to-speech converter software program, which enables a computer to read electronic and scanned text aloud to blind or visually impaired users, and the Kurzweil 3000 program, which is a multifaceted electronic learning system that helps with reading, writing, and study skills.
Raymond Kurzweil at the Singularity Summit at Stanford University in 2006

During the 1990s, Kurzweil founded the Medical Learning Company.[23] The company's products included an interactive computer education program for doctors and a computer-simulated patient. Around the same time, Kurzweil started KurzweilCyberArt.com—a website featuring computer programs to assist the creative art process. The site used to offer free downloads of a program called AARON—a visual art synthesizer developed by Harold Cohen—and of "Kurzweil's Cybernetic Poet", which automatically creates poetry. During this period he also started KurzweilAI.net, a website devoted to showcasing news of scientific developments, publicizing the ideas of high-tech thinkers and critics alike, and promoting futurist-related discussion among the general population through the Mind-X forum.

In 1999, Kurzweil created a hedge fund called "FatKat" (Financial Accelerating Transactions from Kurzweil Adaptive Technologies), which began trading in 2006. He has stated that the ultimate aim is to improve the performance of FatKat's A.I. investment software program, enhancing its ability to recognize patterns in "currency fluctuations and stock-ownership trends."[24] He predicted in his 1999 book, The Age of Spiritual Machines, that computers will one day prove superior to the best human financial minds at making profitable investment decisions. In June 2005, Kurzweil introduced the "Kurzweil-National Federation of the Blind Reader" (K-NFB Reader)—a pocket-sized device consisting of a digital camera and computer unit. Like the Kurzweil Reading Machine of almost 30 years before, the K-NFB Reader is designed to aid blind people by reading written text aloud. The newer machine is portable and scans text through digital camera images, while the older machine is large and scans text through flatbed scanning.

In December 2012, Kurzweil was hired by Google in a full-time position to "work on new projects involving machine learning and language processing".[25] He was personally hired by Google co-founder Larry Page.[26] Larry Page and Kurzweil agreed on a one-sentence job description: "to bring natural language understanding to Google".[27]

He received a Technical Grammy on February 8, 2015, recognizing his diverse technical and creative accomplishments. For purposes of the Grammy, perhaps most notable was the aforementioned Kurzweil K250.[28]

Postmortem life

Kurzweil has joined the Alcor Life Extension Foundation, a cryonics company. In the event of his declared death, Kurzweil plans to be perfused with cryoprotectants, vitrified in liquid nitrogen, and stored at an Alcor facility in the hope that future medical technology will be able to repair his tissues and revive him.[29]

Personal life

Kurzweil is agnostic about the existence of a soul.[30] On the possibility of divine intelligence, Kurzweil is quoted as saying, "Does God exist? I would say, 'Not yet.'"[31]

Kurzweil married Sonya Rosenwald Fenster in 1975 and has two children.[32] Sonya Kurzweil is a psychologist in private practice and clinical instructor in Psychology at Harvard Medical School; she is interested in the way that digital media can be integrated into the lives of children and teens.[33]

He has a son, Ethan Kurzweil, who is a venture capitalist,[34] and a daughter, Amy Kurzweil,[35] who is a writer and cartoonist.

Ray Kurzweil is a cousin of writer Allen Kurzweil.

Creative approach

Kurzweil said "I realize that most inventions fail not because the R&D department can’t get them to work, but because the timing is wrong‍—‌not all of the enabling factors are at play where they are needed. Inventing is a lot like surfing: you have to anticipate and catch the wave at just the right moment."[36][37]

For the past several decades, Kurzweil's most effective and common approach to creative work has been to use the lucid, dreamlike state that immediately precedes waking. He claims to have constructed inventions, solved difficult problems (algorithmic, business-strategy, organizational, and interpersonal), and written speeches in this state.[19]

Books

Kurzweil's first book, The Age of Intelligent Machines, was published in 1990. The nonfiction work discusses the history of computer artificial intelligence (AI) and forecasts future developments. Other experts in the field of AI contribute heavily to the work in the form of essays. The Association of American Publishers awarded it the status of Most Outstanding Computer Science Book of 1990.[38]

In 1993, Kurzweil published a book on nutrition called The 10% Solution for a Healthy Life. The book's main idea is that high levels of fat intake are the cause of many health disorders common in the U.S., and thus that cutting fat consumption down to 10% of the total calories consumed would be optimal for most people.

In 1999, Kurzweil published The Age of Spiritual Machines, which further elucidates his theories regarding the future of technology, which themselves stem from his analysis of long-term trends in biological and technological evolution. Much emphasis is on the likely course of AI development, along with the future of computer architecture.

Kurzweil's next book, published in 2004, returned to human health and nutrition. Fantastic Voyage: Live Long Enough to Live Forever was co-authored by Terry Grossman, a medical doctor and specialist in alternative medicine.

The Singularity Is Near, published in 2005, was made into a movie starring Pauley Perrette from NCIS. In February 2007, Ptolemaic Productions acquired the rights to The Singularity Is Near, The Age of Spiritual Machines and Fantastic Voyage, including the rights to film Kurzweil's life and ideas for the documentary film Transcendent Man, which was directed by Barry Ptolemy.

Transcend: Nine Steps to Living Well Forever,[39] a follow-up to Fantastic Voyage, was released on April 28, 2009.

Kurzweil's book, How to Create a Mind: The Secret of Human Thought Revealed, was released on Nov. 13, 2012.[40] In it Kurzweil describes his Pattern Recognition Theory of Mind, the theory that the neocortex is a hierarchical system of pattern recognizers, and argues that emulating this architecture in machines could lead to an artificial superintelligence.[41]

Movies

Kurzweil wrote and co-produced a movie directed by Anthony Waller, called The Singularity Is Near: A True Story About the Future, in 2010, based in part on his 2005 book The Singularity Is Near. Part fiction, part non-fiction, it features interviews with 20 big thinkers like Marvin Minsky, alongside a narrative B-story in which a computer avatar (Ramona) saves the world from self-replicating microscopic robots. In addition to his movie, an independent, feature-length documentary was made about Kurzweil, his life, and his ideas, called Transcendent Man. Filmmakers Barry Ptolemy and Felicia Ptolemy followed Kurzweil, documenting his global speaking tour. Premiered in 2009 at the Tribeca Film Festival, Transcendent Man documents Kurzweil's quest to reveal mankind's ultimate destiny and explores many of the ideas found in his New York Times bestselling book, The Singularity Is Near, including his concept of exponential growth, radical life expansion, and how we will transcend our biology. The Ptolemys documented Kurzweil's stated goal of bringing back his late father using AI. The film also features critics who argue against Kurzweil's predictions.

In 2010, an independent documentary film called Plug & Pray premiered at the Seattle International Film Festival, in which Kurzweil and one of his major critics, the late Joseph Weizenbaum, argue about the benefits of eternal life.

The feature-length documentary film The Singularity by independent filmmaker Doug Wolens (released at the end of 2012), showcasing Kurzweil, has been acclaimed as "a large-scale achievement in its documentation of futurist and counter-futurist ideas" and "the best documentary on the Singularity to date."[42]

Kurzweil frequently comments on the application of cell-size nanotechnology to the workings of the human brain and how this could be applied to building AI. While being interviewed for a February 2009 issue of Rolling Stone magazine, Kurzweil expressed a desire to construct a genetic copy of his late father, Fredric Kurzweil, from DNA within his grave site. This feat would be achieved by exhumation and extraction of DNA, constructing a clone of Fredric and retrieving memories and recollections—from Ray's mind—of his father. Kurzweil kept all of his father's records, notes, and pictures in order to maintain as much of his father as he could. Ray is known for taking over 200 pills a day, meant to reprogram his biochemistry. This, according to Ray, is only a precursor to nanoscale devices that will eventually replace blood cells, self-updating against specific pathogens to improve the immune system.

Views

The Law of Accelerating Returns

In his 1999 book The Age of Spiritual Machines, Kurzweil proposed "The Law of Accelerating Returns", according to which the rate of change in a wide variety of evolutionary systems (including the growth of technologies) tends to increase exponentially.[43] He gave further focus to this issue in a 2001 essay entitled "The Law of Accelerating Returns", which proposed an extension of Moore's law to a wide variety of technologies, and used this to argue in favor of Vernor Vinge's concept of a technological singularity.[44] Kurzweil suggests that this exponential technological growth is counter-intuitive to the way our brains perceive the world—since our brains were biologically inherited from humans living in a world that was linear and local—and, as a consequence, he claims it has encouraged great skepticism in his future projections.
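The contrast between linear intuition and exponential change that Kurzweil describes can be sketched numerically (a minimal illustration under an assumed constant doubling time, not Kurzweil's actual model or data):

```python
# Minimal illustration (assumed parameters, not Kurzweil's model):
# under a fixed doubling time, capability grows exponentially, so each
# equal step of calendar time multiplies capability rather than adding.

def capability(years: float, doubling_time_years: float = 2.0) -> float:
    """Relative capability after `years`, assuming a constant doubling time."""
    return 2.0 ** (years / doubling_time_years)

# Linear intuition expects 10x the progress in 20 years vs 2 years;
# with a 2-year doubling time the ratio is 1024/2 = 512x instead.
print(capability(2.0))   # 2.0
print(capability(20.0))  # 1024.0
```

This multiplicative-versus-additive gap is the "counter-intuitive" effect the paragraph above refers to.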

Stance on the future of genetics, nanotechnology, and robotics

Kurzweil is working with the Army Science Board to develop a rapid response system to deal with the possible abuse of biotechnology. He suggests that the same technologies that are empowering us to reprogram biology away from cancer and heart disease could be used by a bioterrorist to reprogram a biological virus to be more deadly, communicable, and stealthy. However, he suggests that we have the scientific tools to successfully defend against these attacks, similar to the way we defend against computer software viruses. He has testified before Congress on the subject of nanotechnology, advocating that nanotechnology has the potential to solve serious global problems such as poverty, disease, and climate change ("Nanotech Could Give Global Warming a Big Chill").[45]

In media appearances, Kurzweil has stressed the extreme potential dangers of nanotechnology[16] but argues that in practice, progress cannot be stopped because that would require a totalitarian system, and any attempt to do so would drive dangerous technologies underground and deprive responsible scientists of the tools needed for defense. He suggests that the proper place of regulation is to ensure that technological progress proceeds safely and quickly, but does not deprive the world of profound benefits. He stated, "To avoid dangers such as unrestrained nanobot replication, we need relinquishment at the right level and to place our highest priority on the continuing advance of defensive technologies, staying ahead of destructive technologies. An overall strategy should include a streamlined regulatory process, a global program of monitoring for unknown or evolving biological pathogens, temporary moratoriums, raising public awareness, international cooperation, software reconnaissance, and fostering values of liberty, tolerance, and respect for knowledge and diversity."[46]

Health and aging

Kurzweil admits that he cared little for his health until age 35, when he was found to suffer from a glucose intolerance, an early form of type II diabetes (a major risk factor for heart disease). Kurzweil then found a doctor (Terry Grossman, M.D.) who shares his somewhat unconventional beliefs, and with him developed an extreme regimen involving hundreds of pills, chemical intravenous treatments, red wine, and various other methods in an attempt to live longer. Kurzweil was ingesting "250 supplements, eight to 10 glasses of alkaline water and 10 cups of green tea" every day and drinking several glasses of red wine a week in an effort to "reprogram" his biochemistry.[47] Lately, he has cut down the number of supplement pills to 150.[30]

Kurzweil has made a number of bold claims for his health regimen. In his book The Singularity Is Near, he claimed that he brought his cholesterol level down from the high 200s to 130, raised his HDL (high-density lipoprotein) from below 30 to 55, and lowered his homocysteine from an unhealthy 11 to a much safer 6.2. He also claimed that his C-reactive protein "and all of my other indexes (for heart disease, diabetes, and other conditions) are at ideal levels." He further claimed that his health regimen, including dramatically reducing his fat intake, successfully "reversed" his type 2 diabetes. (The Singularity Is Near, p. 211)

He has written three books on the subjects of nutrition, health, and immortality: The 10% Solution for a Healthy Life, Fantastic Voyage: Live Long Enough to Live Forever and Transcend: Nine Steps to Living Well Forever. In them, he recommends that other people emulate his health practices to the best of their abilities. Kurzweil and his current "anti-aging" doctor, Terry Grossman, now have two websites promoting their first and second book.

Kurzweil asserts that in the future, everyone will live forever.[48] In a 2013 interview, he said that in 15 years, medical technology could add more than a year to one's remaining life expectancy for each year that passes, and we could then "outrun our own deaths". Among other things, he has supported the SENS Research Foundation's approach to finding a way to repair aging damage, and has encouraged the general public to hasten their research by donating.[27][49]

Nassim Nicholas Taleb, Lebanese American essayist, scholar and statistician, criticized Kurzweil's approach of taking multiple pills to achieve longevity in his book Antifragile.[50]

Kurzweil's view of the human neocortex

According to Kurzweil, technologists will be creating synthetic neocortexes based on the operating principles of the human neocortex with the primary purpose of extending our own neocortexes. He claims to believe that the neocortex of an adult human consists of approximately 300 million pattern recognizers. He draws on the commonly accepted belief that the primary anatomical difference between humans and other primates that allowed for superior intellectual abilities was the evolution of a larger neocortex. He claims that the six-layered neocortex deals with increasing abstraction from one layer to the next. He says that at the low levels, the neocortex may seem cold and mechanical because it can only make simple decisions, but at the higher levels of the hierarchy, the neocortex is likely to be dealing with concepts like being funny, being sexy, expressing a loving sentiment, creating a poem or understanding a poem, etc. He claims to believe that these higher levels of the human neocortex were the enabling factors to permit the human development of language, technology, art, and science. He stated, "If the quantitative improvement from primates to humans with the big forehead was the enabling factor to allow for language, technology, art, and science, what kind of qualitative leap can we make with another quantitative increase? Why not go from 300 million pattern recognizers to a billion?"

Encouraging futurism and transhumanism

Kurzweil's standing as a futurist and transhumanist has led to his involvement in several singularity-themed organizations. In December 2004, Kurzweil joined the advisory board of the Machine Intelligence Research Institute.[51] In October 2005, Kurzweil joined the scientific advisory board of the Lifeboat Foundation.[52] On May 13, 2006, Kurzweil was the first speaker at the Singularity Summit at Stanford University in Palo Alto, California.[53] In May 2013, Kurzweil was the keynote speaker at the 2013 proceedings of the Research, Innovation, Start-up and Employment (RISE) international conference in Seoul, South Korea.

In February 2009, Kurzweil, in collaboration with Google and the NASA Ames Research Center in Mountain View, California, announced the creation of the Singularity University training center for corporate executives and government officials. The University's self-described mission is to "assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity's grand challenges". Using Vernor Vinge's Singularity concept as a foundation, the university offered its first nine-week graduate program to 40 students in June 2009.

Predictions

Past predictions

Kurzweil's first book, The Age of Intelligent Machines, presented his ideas about the future. It was written from 1986 to 1989 and published in 1990. Building on Ithiel de Sola Pool's "Technologies of Freedom" (1983), Kurzweil claims to have forecast the dissolution of the Soviet Union due to new technologies such as cellular phones and fax machines disempowering authoritarian governments by removing state control over the flow of information.[54] In the book, Kurzweil also extrapolated preexisting trends in the improvement of computer chess software performance to predict that computers would beat the best human players "by the year 2000".[55] In May 1997, chess World Champion Garry Kasparov was defeated by IBM's Deep Blue computer in a well-publicized chess match.[56]

Perhaps most significantly, Kurzweil foresaw the explosive growth in worldwide Internet use that began in the 1990s. At the time of the publication of The Age of Intelligent Machines, there were only 2.6 million Internet users in the world,[57] and the medium was unreliable, difficult to use, and deficient in content. He also stated that the Internet would explode not only in the number of users but in content as well, eventually granting users access "to international networks of libraries, data bases, and information services". Additionally, Kurzweil claims to have correctly foreseen that the preferred mode of Internet access would inevitably be through wireless systems, and he was also correct to estimate that the latter would become practical for widespread use in the early 21st century.

In October 2010, Kurzweil released his report "How My Predictions Are Faring" in PDF format,[58] which analyzes the predictions he made in his books The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999) and The Singularity Is Near (2005). Of the 147 total predictions, Kurzweil claims that 115 were "entirely correct", 12 "essentially correct", 17 "partially correct", and only 3 "wrong". Adding together the "entirely" and "essentially" correct predictions, Kurzweil's claimed accuracy rate comes to 86%.
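The headline figure can be reproduced directly from the counts stated in the report. The snippet below is an illustrative check using only the numbers quoted above:

```python
# Counts as stated in "How My Predictions Are Faring" (2010).
entirely_correct = 115
essentially_correct = 12
partially_correct = 17
wrong = 3

total = entirely_correct + essentially_correct + partially_correct + wrong
assert total == 147  # matches the stated total of 147 predictions

# Kurzweil counts "entirely" plus "essentially" correct toward his headline rate.
rate = (entirely_correct + essentially_correct) / total
print(f"{rate:.0%}")  # → 86%
```

Note that the claimed 86% excludes the 17 "partially correct" predictions; including them would raise the figure to 98%.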

Daniel Lyons, writing in Newsweek magazine, criticized Kurzweil for some of his predictions that turned out to be wrong, such as the economy continuing to boom from the 1998 dot-com bubble through 2009, a US company having a market capitalization of more than $1 trillion, a supercomputer achieving 20 petaflops, speech recognition being in widespread use, and cars that would drive themselves using sensors installed in highways; all by 2009.[59] To the charge that a 20 petaflop supercomputer was not produced in the time he predicted, Kurzweil responded that he considers Google a giant supercomputer, and that it is indeed capable of 20 petaflops.[59]

Kurzweil's predictions for 2009 were mostly inaccurate, claims Forbes magazine. For example, Kurzweil predicted, "The majority of text is created using continuous speech recognition." This is not the case.[60]

Future predictions

In 1999, Kurzweil published a second book titled The Age of Spiritual Machines, which goes into more depth explaining his futurist ideas. The third and final part of the book is devoted to predictions over the coming century, from 2009 through 2099. In The Singularity Is Near he makes fewer concrete short-term predictions, but includes many longer-term visions.

He states that with radical life extension will come radical life enhancement. He says he is confident that within 10 years we will have the option to spend some of our time in 3D virtual environments that appear just as real as real reality, though these will not yet be made possible via direct interaction with our nervous system. "If you look at video games and how we went from Pong to the virtual reality we have available today, it is highly likely that immortality in essence will be possible." He believes that 20 to 25 years from now, we will have millions of blood-cell-sized devices, known as nanobots, inside our bodies fighting diseases and improving our memory and cognitive abilities.

Kurzweil says that a machine will pass the Turing test by 2029, and that around 2045, "the pace of change will be so astonishingly quick that we won't be able to keep up, unless we enhance our own intelligence by merging with the intelligent machines we are creating". He states that humans will become a hybrid of biological and non-biological intelligence that is increasingly dominated by its non-biological component. He stresses: "AI is not an intelligent invasion from Mars. These are brain extenders that we have created to expand our own mental reach. They are part of our civilization. They are part of who we are. So over the next few decades our human-machine civilization will become increasingly dominated by its non-biological component."

In Transcendent Man, Kurzweil states: "We humans are going to start linking with each other and become a metaconnection; we will all be connected and all be omnipresent, plugged into this global network that is connected to billions of people, and filled with data."[61] Kurzweil has said at a press conference that we are the only species that goes beyond its limitations: "we didn't stay in the caves, we didn't stay on the planet, and we're not going to stay with the limitations of our biology". In the same documentary he is quoted as saying, "I think people are fooling themselves when they say they have accepted death."

In 2008, Kurzweil said on an expert panel at the National Academy of Engineering that solar power will scale up to meet all of the energy needs of Earth's people within 20 years. According to Kurzweil, we need to capture only 1 part in 10,000 of the sunlight that hits Earth's surface to meet all of humanity's energy needs.[62]
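A rough back-of-envelope check of that figure is possible. The power values in the sketch below (roughly 18 TW of average human primary energy use, roughly 174 PW of sunlight intercepted by Earth, and roughly 89 PW reaching the surface after atmospheric absorption and reflection) are commonly cited outside estimates, not figures from the article:

```python
# Back-of-envelope check of the "1 part in 10,000" claim.
# All power values are approximate outside estimates, for illustration only.
human_demand_w = 18e12        # ~18 TW average human primary energy consumption
solar_incident_w = 174e15     # ~174 PW of sunlight intercepted by Earth
solar_at_surface_w = 89e15    # ~89 PW actually reaching the surface

for label, supply in [("incident", solar_incident_w),
                      ("at surface", solar_at_surface_w)]:
    parts_per_10k = 10_000 * human_demand_w / supply
    print(f"{label}: ~{parts_per_10k:.1f} parts in 10,000")
```

With these assumed values the required fraction comes out between roughly one and two parts in ten thousand, so Kurzweil's figure is at least the right order of magnitude.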

Reception

Praise

Kurzweil was referred to as "the ultimate thinking machine" by Forbes[8] and as a "restless genius"[7] by The Wall Street Journal. PBS included Kurzweil as one of 16 "revolutionaries who made America"[9] along with other inventors of the past two centuries. Inc. magazine ranked him #8 among the "most fascinating" entrepreneurs in the United States and called him "Edison's rightful heir".[10]

Criticism

Although the idea of a technological singularity is a popular concept in science fiction, some authors such as Neal Stephenson[63] and Bruce Sterling have voiced skepticism about its real-world plausibility. Sterling expressed his views on the singularity scenario in a talk at the Long Now Foundation entitled The Singularity: Your Future as a Black Hole.[64][65] Other prominent AI thinkers and computer scientists, such as Daniel Dennett,[66] Rodney Brooks,[67] David Gelernter[68] and Paul Allen,[69] have also criticized Kurzweil's projections.

In the cover article of the December 2010 issue of IEEE Spectrum, John Rennie criticizes Kurzweil for several predictions that failed to become manifest by the originally predicted date. "Therein lie the frustrations of Kurzweil's brand of tech punditry. On close examination, his clearest and most successful predictions often lack originality or profundity. And most of his predictions come with so many loopholes that they border on the unfalsifiable."[70]

Bill Joy, cofounder of Sun Microsystems, agrees with Kurzweil's timeline of future progress, but thinks that technologies such as AI, nanotechnology and advanced biotechnology will create a dystopian world.[71] Mitch Kapor, the founder of Lotus Development Corporation, has called the notion of a technological singularity "intelligent design for the IQ 140 people...This proposition that we're heading to this point at which everything is going to be just unimaginably different—it's fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can't obscure that fact for me."[24]

Some critics have argued more strongly against Kurzweil and his ideas. Cognitive scientist Douglas Hofstadter has said of Kurzweil's and Hans Moravec's books: "It's an intimate mixture of rubbish and good ideas, and it's very hard to disentangle the two, because these are smart people; they're not stupid."[72] Biologist P. Z. Myers has criticized Kurzweil's predictions as being based on "New Age spiritualism" rather than science and says that Kurzweil does not understand basic biology.[73][74] VR pioneer Jaron Lanier has even described Kurzweil's ideas as "cybernetic totalism" and has outlined his views on the culture surrounding Kurzweil's predictions in an essay for Edge.org entitled One Half of a Manifesto.[42][75]

British philosopher John Gray argues that contemporary science plays the role for us that magic played for ancient civilizations: it gives a sense of hope to those who are willing to do almost anything in order to achieve eternal life. He cites Kurzweil's Singularity as another example of a trend that has almost always been present in the history of mankind.[76]

The Brain Makers, a history of artificial intelligence written in 1994 by HP Newquist, noted that "Born with the same gift for self-promotion that was a character trait of people like P.T. Barnum and Ed Feigenbaum, Kurzweil had no problems talking up his technical prowess ... Ray Kurzweil was not noted for his understatement."[77]

In a 2015 paper, William D. Nordhaus of Yale University takes an economic look at the impacts of an impending technological singularity. He comments, "There is remarkably little writing on Singularity in the modern macroeconomic literature."[78] Nordhaus supposes that the Singularity could arise from either the demand or supply side of a market economy, but for information technology to progress at the pace Kurzweil suggests, there would have to be significant productivity trade-offs: in order to devote more resources to producing supercomputers, we must decrease our production of non-information-technology goods. Using a variety of econometric methods, Nordhaus runs six supply-side tests and one demand-side test to track the macroeconomic viability of such steep rises in information technology output. Of the seven tests, only two indicated that a Singularity was economically possible, and both of those predicted it was at minimum 100 years away.

Awards and honors

  • First place in the 1965 International Science Fair[17] for inventing the classical music synthesizing computer.
  • The 1978 Grace Murray Hopper Award from the Association for Computing Machinery. The award is given annually to one "outstanding young computer professional" and is accompanied by a $35,000 prize.[79] Kurzweil won it for his invention of the Kurzweil Reading Machine.[80]
  • In 1986, Kurzweil was named Honorary Chairman for Innovation of the White House Conference on Small Business by President Reagan.
  • In 1988, Kurzweil was named Inventor of the Year by MIT and the Boston Museum of Science.[81]
  • In 1990, Kurzweil was voted Engineer of the Year by the over one million readers of Design News Magazine and received their third annual Technology Achievement Award.[81][82]
  • The 1994 Dickson Prize in Science. One is awarded every year by Carnegie Mellon University to individuals who have "notably advanced the field of science." Both a medal and a $50,000 prize are presented to winners.[83]
  • The 1998 "Inventor of the Year" award from the Massachusetts Institute of Technology.[84]
  • The 1999 National Medal of Technology.[85] This is the highest award the President of the United States can bestow upon individuals and groups for pioneering new technologies, and the President dispenses the award at his discretion.[86] Bill Clinton presented Kurzweil with the National Medal of Technology during a White House ceremony in recognition of Kurzweil's development of computer-based technologies to help the disabled.
  • The 2000 Telluride Tech Festival Award of Technology.[87] Two other individuals also received the same honor that year. The award is presented yearly to people who "exemplify the life, times and standard of contribution of Tesla, Westinghouse and Nunn."
  • The 2001 Lemelson-MIT Prize for a lifetime of developing technologies to help the disabled and to enrich the arts.[88] Only one is awarded each year – it is given to highly successful, mid-career inventors. A $500,000 award accompanies the prize.[89]
  • Kurzweil was inducted into the National Inventors Hall of Fame in 2002 for inventing the Kurzweil Reading Machine.[90] The organization "honors the women and men responsible for the great technological advances that make human, social and economic progress possible."[91] Fifteen other people were inducted into the Hall of Fame the same year.[92]
  • The Arthur C. Clarke Lifetime Achievement Award on April 20, 2009 for lifetime achievement as an inventor and futurist in computer-based technologies.[93]
  • In 2011, Kurzweil was named a Senior Fellow of the Design Futures Council.[94]
  • In 2013, Kurzweil was honored as a Silicon Valley Visionary Award winner on June 26 by SVForum.[95]
  • In 2014, Kurzweil was honored with the American Visionary Art Museum’s Grand Visionary Award on January 30.[96][97][98]
  • Kurzweil has received 20 honorary doctorates in science, engineering, music and humane letters from Rensselaer Polytechnic Institute, Hofstra University and other leading colleges and universities, as well as honors from three U.S. presidents – Clinton, Reagan and Johnson.[5][99]
  • Kurzweil has received seven national and international film awards including the CINE Golden Eagle Award and the Gold Medal for Science Education from the International Film and TV Festival of New York.[81]
