
Friday, July 27, 2018

Einstein's thought experiments

From Wikipedia, the free encyclopedia
A hallmark of Albert Einstein's career was his use of visualized thought experiments (German: Gedankenexperiment) as a fundamental tool for understanding physical issues and for elucidating his concepts to others. Einstein's thought experiments took diverse forms. In his youth, he mentally chased beams of light. For special relativity, he employed moving trains and flashes of lightning to explain his most penetrating insights. For general relativity, he considered a person falling off a roof, accelerating elevators, blind beetles crawling on curved surfaces and the like. In his debates with Niels Bohr on the nature of reality, he proposed imaginary devices intended to show, at least in concept, how the Heisenberg uncertainty principle might be evaded. In a profound contribution to the literature on quantum mechanics, Einstein considered two particles briefly interacting and then flying apart so that their states are correlated, anticipating the phenomenon known as quantum entanglement.

Introduction

A thought experiment is a logical argument or mental model cast within the context of an imaginary (hypothetical or even counterfactual) scenario. A scientific thought experiment, in particular, may examine the implications of a theory, law, or set of principles with the aid of fictive and/or natural particulars (demons sorting molecules, cats whose lives hinge upon a radioactive disintegration, men in enclosed elevators) in an idealized environment (massless trapdoors, absence of friction). They describe experiments that, except for some specific and necessary idealizations, could conceivably be performed in the real world.[2]

As opposed to physical experiments, thought experiments do not report new empirical data. They can only provide conclusions based on deductive or inductive reasoning from their starting assumptions. Thought experiments invoke particulars that are irrelevant to the generality of their conclusions. It is the invocation of these particulars that gives thought experiments their experiment-like appearance. A thought experiment can always be reconstructed as a straightforward argument, without the irrelevant particulars. John D. Norton, a well-known philosopher of science, has noted that "a good thought experiment is a good argument; a bad thought experiment is a bad argument."[3]

When effectively used, the irrelevant particulars that convert a straightforward argument into a thought experiment can act as "intuition pumps" that stimulate readers' ability to apply their intuitions to their understanding of a scenario.[4] Thought experiments have a long history. Perhaps the best known in the history of modern science is Galileo's demonstration that falling objects must fall at the same rate regardless of their masses. This has sometimes been taken to be an actual physical demonstration, involving his climbing up the Leaning Tower of Pisa and dropping two heavy weights off it. In fact, it was a logical demonstration described by Galileo in Discorsi e dimostrazioni matematiche (1638).[5]

Einstein had a highly visual understanding of physics. His work in the patent office "stimulated [him] to see the physical ramifications of theoretical concepts." These aspects of his thinking style inspired him to fill his papers with vivid practical detail making them quite different from, say, the papers of Lorentz or Maxwell. This included his use of thought experiments.[6]:26–27;121–127

Special relativity

Pursuing a beam of light

Late in life, Einstein recalled
...a paradox upon which I had already hit at the age of sixteen: If I pursue a beam of light with the velocity c (velocity of light in a vacuum), I should observe such a beam of light as an electromagnetic field at rest though spatially oscillating. There seems to be no such thing, however, neither on the basis of experience nor according to Maxwell's equations. From the very beginning it appeared to me intuitively clear that, judged from the standpoint of such an observer, everything would have to happen according to the same laws as for an observer who, relative to the earth, was at rest. For how should the first observer know or be able to determine, that he is in a state of fast uniform motion? One sees in this paradox the germ of the special relativity theory is already contained.[p 1]:52–53

Einstein's thought experiment as a 16-year-old student

Einstein's recollections of his youthful musings are widely cited because of the hints they provide of his later great discovery. However, Norton has noted that Einstein's reminiscences were probably colored by a half-century of hindsight. Norton lists several problems with Einstein's recounting, both historical and scientific:[7]
1. At 16 years old and a student at the Gymnasium in Aarau, Einstein would have had the thought experiment in late 1895 to early 1896. But various sources note that Einstein did not learn Maxwell's theory until 1898, in university.[7][8]

2. The second issue is that a 19th century aether theorist would have had no difficulties with the thought experiment. Einstein's statement, "...there seems to be no such thing...on the basis of experience," would not have counted as an objection, but would have represented a mere statement of fact, since no one had ever traveled at such speeds.

3. An aether theorist would have regarded "...nor according to Maxwell's equations" as simply representing a misunderstanding on Einstein's part. Unfettered by any notion that the speed of light represents a cosmic limit, the aether theorist would simply have set velocity equal to c, noted that yes indeed, the light would appear to be frozen, and then thought no more of it.[7]
Rather than the thought experiment being at all incompatible with aether theories (which it is not), the youthful Einstein appears to have reacted to the scenario out of an intuitive sense of wrongness. He felt that the laws of optics should obey the principle of relativity. As he grew older, his early thought experiment acquired deeper levels of significance: Einstein felt that Maxwell's equations should be the same for all observers in inertial motion. From Maxwell's equations, one can deduce a single speed of light, and there is nothing in this computation that depends on an observer's speed. Einstein sensed a conflict between Newtonian mechanics and the constant speed of light determined by Maxwell's equations.[6]:114–115
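For reference, a compact sketch of that computation (standard textbook electromagnetism, not Einstein's own notation): in vacuum, Maxwell's equations combine into a wave equation whose propagation speed is fixed entirely by two electromagnetic constants, with no reference to any observer's velocity.

```latex
% In vacuum, Faraday's and Ampere's laws combine into a wave equation:
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\;\Longrightarrow\;
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}
% The wave speed is therefore fixed by the constants alone:
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^8 \ \mathrm{m/s}
```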

Regardless of the historical and scientific issues described above, Einstein's early thought experiment was part of the repertoire of test cases that he used to check on the viability of physical theories. Norton suggests that the real importance of the thought experiment was that it provided a powerful objection to emission theories of light, which Einstein had worked on for several years prior to 1905.[7][8][9]

Magnet and conductor

In the very first paragraph of Einstein's seminal 1905 work introducing special relativity, he writes:
It is known that the application of Maxwell's electrodynamics, as ordinarily conceived at the present time, to moving bodies, leads to asymmetries which don't seem to be connected with the phenomena. Let us, for example, think of the mutual action between a magnet and a conductor. The observed phenomenon in this case depends only on the relative motion of the conductor and the magnet, while according to the usual conception, a strict distinction must be made between the cases where the one or the other of the bodies is in motion. If, for example, the magnet moves and the conductor is at rest, then an electric field of certain energy-value is produced in the neighbourhood of the magnet, which excites a current in those parts of the field where a conductor exists. But if the magnet be at rest and the conductor be set in motion, no electric field is produced in the neighbourhood of the magnet, but an electromotive force is produced in the conductor which corresponds to no energy per se; however, this causes – equality of the relative motion in both considered cases is assumed – an electric current of the same magnitude and the same course, as the electric force in the first case.[p 2]
Magnet and conductor thought experiment

This opening paragraph recounts well-known experimental results obtained by Michael Faraday in 1831. The experiments describe what appeared to be two different phenomena: the motional EMF generated when a wire moves through a magnetic field (see Lorentz force), and the transformer EMF generated by a changing magnetic field (due to the Maxwell–Faraday equation).[9][10][11]:135–157 James Clerk Maxwell himself drew attention to this fact in his 1861 paper On Physical Lines of Force. In the latter half of Part II of that paper, Maxwell gave a separate physical explanation for each of the two phenomena.[p 3]
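The two descriptions can be written out explicitly. In modern notation (a standard textbook summary, not Maxwell's 1861 presentation), the two seemingly distinct mechanisms predict one and the same measurable EMF:

```latex
% Conductor moving through a static field: motional EMF via the Lorentz force
\mathcal{E}_{\mathrm{motional}} = \oint (\mathbf{v} \times \mathbf{B}) \cdot d\boldsymbol{\ell}
% Magnet moving, conductor at rest: transformer EMF via the induced electric field
\mathcal{E}_{\mathrm{transformer}} = \oint \mathbf{E} \cdot d\boldsymbol{\ell}
  = -\int \frac{\partial \mathbf{B}}{\partial t} \cdot d\mathbf{A}
% In both cases the measured current follows the same flux rule:
\mathcal{E} = -\frac{d\Phi_B}{dt}
```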

Although Einstein calls the asymmetry "well-known", there is no evidence that any of Einstein's contemporaries considered the distinction between motional EMF and transformer EMF to be in any way odd or pointing to a lack of understanding of the underlying physics. Maxwell, for instance, had repeatedly discussed Faraday's laws of induction, stressing that the magnitude and direction of the induced current was a function only of the relative motion of the magnet and the conductor, without being bothered by the clear distinction between conductor-in-motion and magnet-in-motion in the underlying theoretical treatment.[11]:135–138

Yet Einstein's reflection on this experiment represented the decisive moment in his long and tortuous path to special relativity. Although the equations describing the two scenarios are entirely different, there is no measurement that can distinguish whether the magnet is moving, the conductor is moving, or both.[10]

In a 1920 review on the Fundamental Ideas and Methods of the Theory of Relativity (unpublished), Einstein related how disturbing he found this asymmetry:
The idea that these two cases should essentially be different was unbearable to me. According to my conviction, the difference between the two could only lie in the choice of the point of view, but not in a real difference.[p 4]:20
Einstein needed to extend the relativity of motion that he perceived between magnet and conductor in the above thought experiment to a full theory. For years, however, he did not know how this might be done. The exact path that Einstein took to resolve this issue is unknown. We do know, however, that Einstein spent several years pursuing an emission theory of light, encountering difficulties that eventually led him to give up the attempt.[10]
Gradually I despaired of the possibility of discovering the true laws by means of constructive efforts based on known facts. The longer and more desperately I tried, the more I came to the conviction that only the discovery of a universal formal principle could lead us to assured results.[p 1]:49
That decision ultimately led to his development of special relativity as a theory founded on two postulates of which he could be sure.[10] Expressed in contemporary physics vocabulary, his postulates were as follows:[note 1]
1. The laws of physics take the same form in all inertial frames.
2. In any given inertial frame, the velocity of light c is the same whether the light be emitted by a body at rest or by a body in uniform motion. [Emphasis added by editor][12]:140–141
Einstein's wording of the second postulate was one with which nearly all theorists of his day could agree. His wording is a far more intuitive form of the second postulate than the stronger version frequently encountered in popular writings and college textbooks.[13][note 2]

Trains, embankments, and lightning flashes

The topic of how Einstein arrived at special relativity has been a fascinating one to many scholars, and it is not hard to understand why: A lowly, twenty-six-year-old patent officer (third class), largely self-taught in physics and completely divorced from mainstream research, nevertheless produced in his miracle year of 1905 four extraordinary works, only one of which (his paper on Brownian motion) appeared related to anything that he had ever published before.[8]

Einstein's paper, On the Electrodynamics of Moving Bodies, is a polished work that bears few traces of its gestation. Documentary evidence concerning the development of the ideas that went into it consists of, quite literally, only two sentences in a handful of preserved early letters, and various later historical remarks by Einstein himself, some of them known only second-hand and at times contradictory.[8]
 
Train and embankment thought experiment

In regards to the relativity of simultaneity, Einstein's 1905 paper develops the concept vividly by carefully considering the basics of how time may be disseminated through the exchange of signals between clocks.[15] In his popular work, Relativity: The Special and General Theory, Einstein translates the formal presentation of his paper into a thought experiment using a train, a railway embankment, and lightning flashes. The essence of the thought experiment is as follows:
  • Observer M stands on an embankment, while observer M' rides on a rapidly traveling train. At the precise moment that M and M' coincide in their positions, lightning strikes points A and B equidistant from M and M'.
  • Light from these two flashes reaches M at the same time, from which M concludes that the bolts were synchronous.
  • The combination of Einstein's first and second postulates implies that, despite the rapid motion of the train relative to the embankment, M' measures exactly the same speed of light as does M. Since M' was equidistant from A and B when lightning struck, the fact that M' receives light from B before light from A means that to M', the bolts were not synchronous. Instead, the bolt at B struck first (see the numeric sketch after this list).[p 5]:29–31 [note 3]
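A minimal numeric sketch of the argument above, using the Lorentz transformation (the train speed and the separation of the strikes are illustrative values, not from Einstein's text):

```python
import math

c = 299_792_458.0   # speed of light (m/s)
v = 0.6 * c         # train speed (hypothetical, for illustration)
L = 1000.0          # embankment-frame distance from M to each strike (m)

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def train_frame_time(t, x):
    """Time coordinate of event (t, x) in the train (M') frame."""
    return gamma * (t - v * x / c**2)

# In the embankment frame both bolts strike at t = 0:
# event A at x = -L (behind the train), event B at x = +L (ahead of it).
t_A = train_frame_time(0.0, -L)
t_B = train_frame_time(0.0, +L)

print(f"train-frame time of strike A: {t_A:+.3e} s")
print(f"train-frame time of strike B: {t_B:+.3e} s")
# t_B < t_A: in the train frame the bolt at B strikes first,
# exactly as M' concludes in the thought experiment.
```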
A routine supposition among historians of science is that, in accordance with the analysis given in his 1905 special relativity paper and in his popular writings, Einstein discovered the relativity of simultaneity by thinking about how clocks could be synchronized by light signals.[15] The Einstein synchronization convention was originally developed by telegraphers in the mid-19th century. The dissemination of precise time was an increasingly important topic during this period. Trains needed accurate time to schedule use of track, cartographers needed accurate time to determine longitude, while astronomers and surveyors dared to consider the worldwide dissemination of time to accuracies of thousandths of a second.[16]:132–144;183–187 Following this line of argument, Einstein's position in the patent office, where he specialized in evaluating electromagnetic and electromechanical patents, would have exposed him to the latest developments in time technology, which would have guided him in his thoughts towards understanding the relativity of simultaneity.[16]:243–263

However, all of the above is supposition. In later recollections, when Einstein was asked about what inspired him to develop special relativity, he would mention his riding a light beam and his magnet and conductor thought experiments. He would also mention the importance of the Fizeau experiment and the observation of stellar aberration. "They were enough", he said.[17] He never mentioned thought experiments about clocks and their synchronization.[15]

The routine analyses of the Fizeau experiment and of stellar aberration, which treat light as Newtonian corpuscles, do not require relativity. But problems arise if one considers light as waves traveling through an aether, problems that are resolved by applying the relativity of simultaneity. It is entirely possible, therefore, that Einstein arrived at special relativity through a different path than that commonly assumed: through his examination of Fizeau's experiment and of stellar aberration.[15]

We therefore do not know just how important clock synchronization and the train and embankment thought experiment were to Einstein's development of the concept of the relativity of simultaneity. We do know, however, that the train and embankment thought experiment was the preferred means whereby he chose to teach this concept to the general public.[p 5]:29–31

General relativity

Falling painters and accelerating elevators

In his unpublished 1920 review, Einstein related the genesis of his thoughts on the equivalence principle:
When I was busy (in 1907) writing a summary of my work on the theory of special relativity for the Jahrbuch für Radioaktivität und Elektronik [Yearbook for Radioactivity and Electronics], I also had to try to modify the Newtonian theory of gravitation such as to fit its laws into the theory. While attempts in this direction showed the practicability of this enterprise, they did not satisfy me because they would have had to be based upon unfounded physical hypotheses. At that moment I got the happiest thought of my life in the following form: In an example worth considering, the gravitational field has a relative existence only in a manner similar to the electric field generated by magneto-electric induction. Because for an observer in free-fall from the roof of a house there is during the fall—at least in his immediate vicinity—no gravitational field. Namely, if the observer lets go of any bodies, they remain relative to him, in a state of rest or uniform motion, independent of their special chemical or physical nature. The observer, therefore, is justified in interpreting his state as being "at rest."[p 4]:20–21
The realization "startled" Einstein, and inspired him to begin an eight-year quest that led to what is considered to be his greatest work, the theory of general relativity. Over the years, the story of the falling man has become an iconic one, much embellished by other writers. In most retellings of Einstein's story, the falling man is identified as a painter. In some accounts, Einstein was inspired after he witnessed a painter falling from the roof of a building adjacent to the patent office where he worked. This version of the story leaves unanswered the question of why Einstein might consider his observation of such an unfortunate accident to represent the happiest thought in his life.[6]:145
 
A thought experiment used by Einstein to illustrate the equivalence principle

Einstein later refined his thought experiment to consider a man inside a large enclosed chest or elevator falling freely in space. While in free fall, the man would consider himself weightless, and any loose objects that he emptied from his pockets would float alongside him. Then Einstein imagined a rope attached to the roof of the chamber. A powerful "being" of some sort begins pulling on the rope with constant force. The chamber begins to move "upwards" with a uniformly accelerated motion. Within the chamber, all of the man's perceptions are consistent with his being in a uniform gravitational field. Einstein asked, "Ought we to smile at the man and say that he errs in his conclusion?" Einstein answered no. Rather, the thought experiment provided "good grounds for extending the principle of relativity to include bodies of reference which are accelerated with respect to each other, and as a result we have gained a powerful argument for a generalised postulate of relativity."[p 5]:75–79 [6]:145–147

Through this thought experiment, Einstein addressed an issue that was so well-known, scientists rarely worried about it or considered it puzzling: Objects have "gravitational mass," which determines the force with which they are attracted to other objects. Objects also have "inertial mass," which determines the relationship between the force applied to an object and how much it accelerates. Newton had pointed out that, even though they are defined differently, gravitational mass and inertial mass always seem to be equal. But until Einstein, no one had conceived a good explanation as to why this should be so. From the correspondence revealed by his thought experiment, Einstein concluded that "it is impossible to discover by experiment whether a given system of coordinates is accelerated, or whether...the observed effects are due to a gravitational field." This correspondence between gravitational mass and inertial mass is the equivalence principle.[6]:147
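Newton's observation can be compressed into one line. Writing m_i for inertial mass and m_g for gravitational mass:

```latex
% Newton's second law, with gravity supplying the force:
m_i \, a = m_g \, g
\quad\Longrightarrow\quad
a = \frac{m_g}{m_i} \, g
% All bodies fall at the same rate only if m_g / m_i is universal.
% The equivalence principle promotes m_g = m_i from an unexplained
% coincidence to a foundational principle.
```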

An extension to his accelerating observer thought experiment allowed Einstein to deduce that "rays of light are propagated curvilinearly in gravitational fields."[p 5]:83–84 [6]:190
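A rough back-of-the-envelope version of that deduction (a heuristic estimate only; the full general-relativistic calculation gives twice this value for light grazing a massive body): in a chamber accelerating upward at g, a horizontal light ray appears to bend downward, and by the equivalence principle the same must happen in a gravitational field.

```latex
% Time for light to cross a chamber of width w:
t = \frac{w}{c}
% Apparent downward deflection accumulated during the crossing:
\Delta y = \tfrac{1}{2} g t^2 = \frac{g w^2}{2 c^2}
```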

Quantum mechanics

Background: Einstein and the quantum

Many myths have grown up about Einstein's relationship with quantum mechanics. Freshman physics students are aware that Einstein explained the photoelectric effect and introduced the concept of the photon. But students who have grown up with the photon may not be aware of how revolutionary the concept was for his time. The best-known factoids about Einstein's relationship with quantum mechanics are his statement, "God does not play dice" and the indisputable fact that he just didn't like the theory in its final form. This has led to the general impression that, despite his initial contributions, Einstein was out of touch with quantum research and played at best a secondary role in its development.[18]:1–4 Concerning Einstein's estrangement from the general direction of physics research after 1925, his well-known scientific biographer, Abraham Pais, wrote:
Einstein is the only scientist to be justly held equal to Newton. That comparison is based exclusively on what he did before 1925. In the remaining 30 years of his life he remained active in research but his fame would be undiminished, if not enhanced, had he gone fishing instead.[19]:43
In hindsight, we know that Pais was incorrect in his assessment.

Einstein was arguably the greatest single contributor to the "old" quantum theory.[18][note 4]
  • In his 1905 paper on light quanta,[p 6] Einstein created the quantum theory of light. His proposal that light exists as tiny packets (photons) was so revolutionary, that even such major pioneers of quantum theory as Planck and Bohr refused to believe that it could be true.[18]:70–79;282–284 [note 5] Bohr, in particular, was a passionate disbeliever in light quanta, and repeatedly argued against them until 1925, when he yielded in the face of overwhelming evidence for their existence.[21]
  • In his 1906 theory of specific heats, Einstein was the first to realize that quantized energy levels explained the specific heat of solids.[p 7] In this manner, he found a rational justification for the third law of thermodynamics (i.e. the entropy of any system approaches zero as the temperature approaches absolute zero[note 6]): at very cold temperatures, atoms in a solid don't have enough thermal energy to reach even the first excited quantum level, and so cannot vibrate (a sketch of the resulting heat-capacity formula follows this list).[18]:141–148 [note 7]
  • Einstein proposed the wave-particle duality of light. In 1909, using a rigorous fluctuation argument based on a thought experiment and drawing on his previous work on Brownian motion, he predicted the emergence of a "fusion theory" that would combine the two views.[18]:136–140 [p 8] [p 9] Basically, he demonstrated that the Brownian motion experienced by a mirror in thermal equilibrium with black body radiation would be the sum of two terms, one due to the wave properties of radiation, the other due to its particulate properties.[3]
  • Although Planck is justly hailed as the father of quantum mechanics, his derivation of the law of black-body radiation rested on fragile ground, since it required ad hoc assumptions of an unreasonable character.[note 8] Furthermore, Planck's derivation represented an analysis of classical harmonic oscillators merged with quantum assumptions in an improvised fashion.[18]:184 In his 1916 theory of radiation, Einstein was the first to create a purely quantum explanation.[p 10] This paper, well-known for broaching the possibility of stimulated emission (the basis of the laser), changed the nature of the evolving quantum theory by introducing the fundamental role of random chance.[18]:181–192
  • In 1924, Einstein received a short manuscript by an unknown Indian professor, Satyendra Nath Bose, outlining a new method of deriving the law of blackbody radiation.[note 9] Einstein was intrigued by Bose's peculiar method of counting the number of distinct ways of putting photons into the available states, a method of counting that Bose apparently did not realize was unusual.[note 10] Einstein, however, understood that Bose's counting method implied that photons are, in a deep sense, indistinguishable. He translated the paper into German and had it published. Einstein then followed Bose's paper with an extension to Bose's work which predicted Bose-Einstein condensation, one of the fundamental research topics of condensed matter physics.[18]:215–240
  • While trying to develop a mathematical theory of light which would fully encompass its wavelike and particle-like aspects, Einstein developed the concept of "ghost fields". A guiding wave obeying Maxwell's classical laws would propagate following the normal laws of optics, but would not transmit any energy. This guiding wave, however, would govern the appearance of quanta of energy hν on a statistical basis, so that the appearance of these quanta would be proportional to the intensity of the interference radiation. These ideas became widely known in the physics community, and through Born's work in 1926, later became a key concept in the modern quantum theory of radiation and matter.[18]:193–203 [note 11]
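To make the 1906 specific-heat item concrete: treating the atoms of a solid as quantized oscillators of a single frequency ν yields Einstein's heat-capacity formula (the standard modern statement of his model), which vanishes as T → 0, in accord with the third law.

```latex
% Einstein model: N atoms as 3N oscillators of frequency \nu,
% with Einstein temperature \theta_E = h\nu / k_B:
C_V = 3 N k_B \left( \frac{\theta_E}{T} \right)^{2}
      \frac{e^{\theta_E / T}}{\left( e^{\theta_E / T} - 1 \right)^{2}}
% As T -> 0, C_V -> 0 exponentially: the atoms cannot reach
% even the first excited level, and so cannot vibrate.
```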
Therefore, Einstein before 1925 originated most of the key concepts of quantum theory: light quanta, wave-particle duality, the fundamental randomness of physical processes, the concept of indistinguishability, and the probability density interpretation of the wave equation. In addition, Einstein can arguably be considered the father of solid state physics and condensed matter physics.[24] He provided a correct derivation of the blackbody radiation law and sparked the notion of the laser.

What of after 1925? In 1935, working with two younger colleagues, Einstein issued a final challenge to quantum mechanics, attempting to show that it could not represent a final solution.[p 12] Despite the questions raised by this paper, it made little or no difference to how physicists employed quantum mechanics in their work. Of this paper, Pais was to write:
The only part of this article that will ultimately survive, I believe, is this last phrase [i.e. "No reasonable definition of reality could be expected to permit this" where "this" refers to the instantaneous transmission of information over a distance], which so poignantly summarizes Einstein's views on quantum mechanics in his later years....This conclusion has not affected subsequent developments in physics, and it is doubtful that it ever will.[12]:454–457
In contrast to Pais' negative assessment, this paper, outlining the EPR paradox, is currently among the top ten papers published in Physical Review, and is the centerpiece of the development of quantum information theory,[25] which has been termed the "third quantum revolution."[26] [note 12]

Einstein's light box

Einstein did not like the direction in which quantum mechanics had turned after 1925. Although excited by Heisenberg's matrix mechanics, Schroedinger's wave mechanics, and Born's clarification of the meaning of the Schroedinger wave equation (i.e. that the absolute square of the wave function is to be interpreted as a probability density), his instincts told him that something was missing.[6]:326–335 In a letter to Born, he wrote:
Quantum mechanics is very impressive. But an inner voice tells me that it is not yet the real thing. The theory produces a good deal but hardly brings us closer to the secret of the Old One.[12]:440–443
The Solvay Debates between Bohr and Einstein began in dining-room discussions at the Fifth Solvay International Conference on Electrons and Photons in 1927. Einstein's issue with the new quantum mechanics was not just that, with the probability interpretation, it rendered invalid the notion of rigorous causality. After all, as noted above, Einstein himself had introduced random processes in his 1916 theory of radiation. Rather, by defining and delimiting the maximum amount of information obtainable in a given experimental arrangement, the Heisenberg uncertainty principle denied the existence of any knowable reality in terms of a complete specification of the momenta and description of individual particles, an objective reality that would exist whether or not we could ever observe it.[6]:325–326 [12]:443–446

Over dinner, during after-dinner discussions, and at breakfast, Einstein debated with Bohr and his followers on the question whether quantum mechanics in its present form could be called complete. Einstein illustrated his points with increasingly clever thought experiments intended to prove that position and momentum could in principle be simultaneously known to arbitrary precision. For example, one of his thought experiments involved sending a beam of electrons through a shuttered screen, recording the positions of the electrons as they struck a photographic screen. Bohr and his allies would always be able to counter Einstein's proposal, usually by the end of the same day.[6]:344–347

On the final day of the conference, Einstein revealed that the uncertainty principle was not the only aspect of the new quantum mechanics that bothered him. Quantum mechanics, at least in the Copenhagen interpretation, appeared to allow action at a distance, the ability for two separated objects to communicate at speeds greater than light. By 1928, the consensus was that Einstein had lost the debate, and even his closest allies during the Fifth Solvay Conference, for example Louis de Broglie, conceded that quantum mechanics appeared to be complete.[6]:346–347
 
Einstein's light box

At the Sixth Solvay International Conference on Magnetism (1930), Einstein came armed with a new thought experiment. This involved a box with a shutter that operated so quickly, it would allow only one photon to escape at a time. The box would first be weighed exactly. Then, at a precise moment, the shutter would open, allowing a photon to escape. The box would then be re-weighed. The well-known relationship between mass and energy E = mc² would allow the energy of the particle to be precisely determined. With this gadget, Einstein believed that he had demonstrated a means to obtain, simultaneously, a precise determination of the energy of the photon as well as its exact time of departure from the system.[6]:346–347 [12]:446–448

Bohr was shaken by this thought experiment. Unable to think of a refutation, he went from one conference participant to another, trying to convince them that Einstein's thought experiment couldn't be true, that if it were true, it would literally mean the end of physics. After a sleepless night, he finally worked out a response which, ironically, depended on Einstein's general relativity.[6]:348–349 Consider the illustration of Einstein's light box:[12]:446–448
1. After emitting a photon, the loss of weight causes the box to rise in the gravitational field.
2. The observer returns the box to its original height by adding weights until the pointer points to its initial position. It takes a certain amount of time t for the observer to perform this procedure. How long it takes depends on the strength of the spring and on how well-damped the system is. If undamped, the box will bounce up and down forever. If over-damped, the box will return to its original position sluggishly (see Damped spring-mass system).[note 13]
3. The longer that the observer allows the damped spring-mass system to settle, the closer the pointer will reach its equilibrium position. At some point, the observer will conclude that his setting of the pointer to its initial position is within an allowable tolerance. There will be some residual error Δq in returning the pointer to its initial position. Correspondingly, there will be some residual error Δm in the weight measurement.
4. Adding the weights imparts a momentum p to the box which can be measured with an accuracy Δp delimited by Δp Δq ≈ h. It is clear that Δp < g t Δm, where g is the acceleration due to gravity. Plugging in yields g t Δm Δq > h.
5. General relativity informs us that while the box has been at a height different than its original height, its clock has been ticking at a rate different than its original rate. The red shift formula informs us that there will be an uncertainty Δt = c⁻² g t Δq in the determination of t₀, the emission time of the photon.
6. Hence, c² Δm Δt = ΔE Δt > h. The accuracy with which the energy of the photon is measured restricts the precision with which its moment of emission can be measured, in accordance with the Heisenberg uncertainty principle.
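Chaining steps 4 through 6 makes the refutation explicit:

```latex
\Delta p \, \Delta q \approx h, \qquad \Delta p < g t \, \Delta m
\;\Longrightarrow\; g t \, \Delta m \, \Delta q > h
% Gravitational red shift over the residual displacement \Delta q:
\Delta t = \frac{g t \, \Delta q}{c^{2}}
% Substituting \Delta E = c^{2} \Delta m recovers the uncertainty relation:
\Delta E \, \Delta t = (c^{2} \Delta m) \cdot \frac{g t \, \Delta q}{c^{2}}
  = g t \, \Delta m \, \Delta q > h
```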
After this last attempt at finding a loophole around the uncertainty principle was refuted, Einstein quit searching for inconsistencies in quantum mechanics. Instead, he turned to the other aspects of quantum mechanics with which he was uncomfortable, centering on his critique of action at a distance. His next paper on quantum mechanics foreshadowed his later paper on the EPR paradox.[12]:448

Einstein was gracious in his defeat. The following September, Einstein nominated Heisenberg and Schroedinger for the Nobel Prize, stating, "I am convinced that this theory undoubtedly contains a part of the ultimate truth."[12]:448

EPR Paradox

Both Bohr and Einstein were subtle men. Einstein tried very hard to show that quantum mechanics was inconsistent; Bohr, however, was always able to counter his arguments. But in his final attack Einstein pointed to something so deep, so counterintuitive, so troubling, and yet so exciting, that at the beginning of the twenty-first century it has returned to fascinate theoretical physicists. Bohr’s only answer to Einstein’s last great discovery—the discovery of entanglement—was to ignore it.
Einstein's fundamental dispute with quantum mechanics wasn't about whether God rolled dice, whether the uncertainty principle allowed simultaneous measurement of position and momentum, or even whether quantum mechanics was complete. It was about reality. Does a physical reality exist independent of our ability to observe it? To Bohr and his followers, such questions were meaningless. All that we can know are the results of measurements and observations. It makes no sense to speculate about an ultimate reality that exists beyond our perceptions.[6]:460–461

Einstein's beliefs had evolved over the years from those that he had held when he was young, when, as a logical positivist heavily influenced by his reading of David Hume and Ernst Mach, he had rejected such unobservable concepts as absolute time and space. Einstein believed:[6]:460–461
1. A reality exists independent of our ability to observe it.
2. Objects are located at distinct points in spacetime and have their own independent, real existence. In other words, he believed in separability and locality.
3. Although at a superficial level, quantum events may appear random, at some ultimate level, strict causality underlies all processes in nature.
EPR paradox thought experiment. (top) The total wave function of a particle pair spreads from the collision point. (bottom) Observation of one particle collapses the wave function.

Einstein considered that realism and localism were fundamental underpinnings of physics. After leaving Nazi Germany and settling in Princeton at the Institute for Advanced Study, Einstein began writing up a thought experiment that he had been mulling over since attending a lecture by Léon Rosenfeld in 1933. Since the paper was to be in English, Einstein enlisted the help of the 46-year-old Boris Podolsky, a fellow who had moved to the Institute from Caltech; he also enlisted the help of the 26-year-old Nathan Rosen, also at the Institute, who did much of the math.[note 14] The result of their collaboration was the four page EPR paper, which in its title asked the question Can Quantum-Mechanical Description of Physical Reality be Considered Complete?[6]:448–450 [p 12]

After seeing the paper in print, Einstein found himself unhappy with the result. His clear conceptual visualization had been buried under layers of mathematical formalism.[6]:448–450

Einstein's thought experiment involved two particles that have collided or which have been created in such a way that they have properties which are correlated. The total wave function for the pair links the positions of the particles as well as their linear momenta.[6]:450–453 [25] The figure depicts the spreading of the wave function from the collision point. However, observation of the position of the first particle allows us to determine precisely the position of the second particle no matter how far the pair have separated. Likewise, measuring the momentum of the first particle allows us to determine precisely the momentum of the second particle. "In accordance with our criterion for reality, in the first case we must consider the quantity P as being an element of reality, in the second case the quantity Q is an element of reality."[p 12]
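The correlations can be read off from the two-particle wave function itself. In essentially the form used in the EPR paper, the state is a simultaneous eigenstate of the relative position and the total momentum:

```latex
% EPR-type state: definite relative position, definite total momentum
\Psi(x_1, x_2) = \int_{-\infty}^{\infty} e^{\, i p (x_1 - x_2 + x_0)/\hbar} \, dp
  \;\propto\; \delta(x_1 - x_2 + x_0)
% Measuring x_1 fixes x_2 = x_1 + x_0 exactly;
% measuring p_1 fixes p_2 = -p_1 exactly (total momentum is zero).
```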

Einstein concluded that the second particle, which we have never directly observed, must have at any moment a position that is real and a momentum that is real. Quantum mechanics does not account for these features of reality. Therefore, quantum mechanics is not complete.[6]:451 It is known, from the uncertainty principle, that position and momentum cannot be measured at the same time. But even though their values can only be determined in distinct contexts of measurement, can they both be definite at the same time? Einstein concluded that the answer must be yes.[25]

The only alternative, claimed Einstein, would be to assert that measuring the first particle instantaneously affected the reality of the position and momentum of the second particle.[6]:451 "No reasonable definition of reality could be expected to permit this."[p 12]

Bohr was stunned when he read Einstein's paper and spent more than six weeks framing his response, which he gave exactly the same title as the EPR paper.[p 16] The EPR paper forced Bohr to make a major revision in his understanding of complementarity in the Copenhagen interpretation of quantum mechanics.[25]

Prior to EPR, Bohr had maintained that disturbance caused by the act of observation was the physical explanation for quantum uncertainty. In the EPR thought experiment, however, Bohr had to admit that "there is no question of a mechanical disturbance of the system under investigation." On the other hand, he noted that the two particles were one system described by one quantum function. Furthermore, the EPR paper did nothing to dispel the uncertainty principle.[12]:454–457 [note 15]

Later commentators have questioned the strength and coherence of Bohr's response. As a practical matter, however, physicists for the most part did not pay much attention to the debate between Bohr and Einstein, since the opposing views did not affect one's ability to apply quantum mechanics to practical problems, but only affected one's interpretation of the quantum formalism. If they thought about the problem at all, most working physicists tended to follow Bohr's leadership.[25][30][31]

So stood the situation for nearly 30 years. Then, in 1964, John Stewart Bell made the groundbreaking discovery that Einstein's local realist world view made experimentally verifiable predictions that would be in conflict with those of quantum mechanics. Bell's discovery shifted the Einstein–Bohr debate from philosophy to the realm of experimental physics. Bell's theorem showed that, for any local realist formalism, there exist limits on the predicted correlations between pairs of particles in an experimental realization of the EPR thought experiment. In 1972, the first experimental tests were carried out. Successive experiments improved the accuracy of observation and closed loopholes. To date, it is virtually certain that local realist theories have been falsified.[32]
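The flavor of Bell's limit can be shown in a few lines. For the spin-singlet version of the EPR experiment, quantum mechanics predicts the correlation E(a, b) = −cos(a − b) between analyzers at angles a and b, while any local realist theory bounds the CHSH combination by |S| ≤ 2 (a textbook illustration with standard optimal angles, not a reconstruction of any particular experiment):

```python
import math

def E(a, b):
    """Quantum correlation of spin measurements along angles a, b (radians)
    for the singlet state: E(a, b) = -cos(a - b)."""
    return -math.cos(a - b)

# Standard CHSH analyzer settings that maximize the violation.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"S = {S:.4f}")                      # about -2.8284, i.e. |S| = 2*sqrt(2)
print("local realist bound |S| <= 2 violated:", abs(S) > 2)
```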

So Einstein was wrong. But it has several times been the case that Einstein's "mistakes" have foreshadowed and provoked major shifts in scientific research. Such, for instance, has been the case with his proposal of the cosmological constant, which Einstein considered his greatest blunder, but which currently is being actively investigated for its possible role in the accelerating expansion of the universe. In his Princeton years, Einstein was virtually shunned as he pursued the unified field theory. Nowadays, innumerable physicists pursue Einstein's dream for a "theory of everything."[33]

The EPR paper did not prove quantum mechanics to be incorrect. What it did prove was that quantum mechanics, with its "spooky action at a distance," is completely incompatible with commonsense understanding.[34] Furthermore, the effect predicted by the EPR paper, quantum entanglement, has inspired approaches to quantum mechanics different from the Copenhagen interpretation, and has been at the forefront of major technological advances in quantum computing, quantum encryption, and quantum information theory.

Is Fermat's Last Theorem Stated Too Narrowly?

I am not a mathematician, but I do enjoy math and numbers, and can't resist playing around a little.  One of my "toys" has been Fermat's Last Theorem.  For a broader explanation of this theorem and the proof of it, see https://en.wikipedia.org/wiki/Fermat's_Last_Theorem, which I will summarize here with the first paragraph:

"In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers a, b, and c satisfy the equation an + bn = cn for any integer value of n greater than 2. The cases n = 1 and n = 2 have been known to have infinitely many solutions since antiquity."

As you might know, the proof of this theorem was, and still is, an historic mystery. Fermat claimed to have proved it himself around 1637, but never provided the proof, only a claim "in the margin of a copy of Arithmetica where he claimed he had a proof that was too large to fit in the margin." Yet if Fermat truly did have a valid proof, he never wrote it out before his death in 1665.


It was only in 1994, 357 years after Fermat's cryptic and unproved claim, that the mathematician Andrew Wiles finally presented the first proof of the theorem which was accepted by the mathematical community as correct and flawless (please don't ask me to explain it, however!).  This in itself is astonishing enough, but what is more astonishing is the fact that Wiles used mathematical techniques which did not exist in Fermat's time.  Thus, if Fermat really did supply a proof, he did so using far simpler techniques -- techniques which have evaded the most brilliant math minds for over three hundred years!  One can only conclude that either Fermat erred in his claim, or a straightforward proof has been eluding such minds to this day.

Again, I warn I am no mathematician, but since Andrew Wiles' proof I have wondered if Fermat's theorem can be expanded upon.  The theorem may follow a pattern, in which the number of integers being added must equal the exponent itself.  For example, for 3^2 + 4^2 = 5^2, the simplest example of the theorem, that number is two.

What about three?  It turns out there is an example here, one that I stumbled upon quite accidentally:  3^3 + 4^3 + 5^3 = 6^3. Now I suspect there are more examples for three, just as there are for two, although I have not tried to find any of them.  Nor have I attempted to find examples for integers greater than three, though I suspect they exist.
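For anyone who wants to play along, here is a quick brute-force search (a hobbyist sketch; the function name, search bounds, and result limit are arbitrary choices of mine) that looks for n integers whose n-th powers sum to another n-th power:

```python
from itertools import combinations_with_replacement

def find_examples(n, limit, max_results=5):
    """Find n integers in 1..limit whose n-th powers sum to an n-th power."""
    # Reverse lookup: n-th power -> its root, covering all reachable totals.
    powers = {k**n: k for k in range(1, limit * n)}
    results = []
    for combo in combinations_with_replacement(range(1, limit + 1), n):
        total = sum(k**n for k in combo)
        if total in powers:
            results.append((combo, powers[total]))
            if len(results) >= max_results:
                break
    return results

for n in (2, 3):
    for combo, root in find_examples(n, 60):
        terms = " + ".join(f"{k}^{n}" for k in combo)
        print(f"{terms} = {root}^{n}")
```

Running it confirms 3^2 + 4^2 = 5^2 and 3^3 + 4^3 + 5^3 = 6^3, and turns up further examples for both exponents.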

You probably see where I'm going with this.  Is it a reasonable conjecture that, for all (positive?) integers n, there are sums of n integers each raised to the n-th power which will yield another integer raised to the n-th power?  Furthermore, is it an iff (if and only if) condition for all integers greater than one?

Of course, I've no idea.  I'm certainly unable to prove or disprove such a conjecture.  But it does seem worth attempting (if no one has done so yet).  Perhaps one of you can do so, or at least has some logical input on the question.  I'll leave it at that.




Measurement problem

From Wikipedia, the free encyclopedia
 
The measurement problem in quantum mechanics is the problem of how (or whether) wave function collapse occurs. The inability to observe this process directly has given rise to different interpretations of quantum mechanics, and poses a key set of questions that each interpretation must answer. The wave function in quantum mechanics evolves deterministically according to the Schrödinger equation as a linear superposition of different states, but actual measurements always find the physical system in a definite state. Any future evolution is based on the state the system was discovered to be in when the measurement was made, meaning that the measurement "did something" to the system that is not obviously a consequence of Schrödinger evolution.

To express matters differently (to paraphrase Steven Weinberg[1][2]), the Schrödinger wave equation determines the wave function at any later time. If observers and their measuring apparatus are themselves described by a deterministic wave function, why can we not predict precise results for measurements, but only probabilities? As a general question: How can one establish a correspondence between quantum and classical reality?[3]

Schrödinger's cat

The best known example is the "paradox" of Schrödinger's cat. A mechanism is arranged to kill a cat if a quantum event, such as the decay of a radioactive atom, occurs. Thus the fate of a large scale object, the cat, is entangled with the fate of a quantum object, the atom. Prior to observation, according to the Schrödinger equation, the cat is apparently evolving into a linear combination of states that can be characterized as an "alive cat" and states that can be characterized as a "dead cat". Each of these possibilities is associated with a specific nonzero probability amplitude; the cat seems to be in some kind of "combination" state called a "quantum superposition". However, a single, particular observation of the cat does not measure the probabilities: it always finds either a living cat, or a dead cat. After the measurement the cat is definitively alive or dead. The question is: How are the probabilities converted into an actual, sharply well-defined outcome?
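A toy illustration of that conversion from amplitudes to definite outcomes (a simulation of the Born rule using assumed equal amplitudes; nothing here models a real decay process):

```python
import random

# Toy "cat state": probability amplitudes for |alive> and |dead>.
amp_alive = 2 ** -0.5   # assumed 50/50 superposition
amp_dead  = 2 ** -0.5

def measure():
    """One observation: collapse to a single definite outcome (Born rule)."""
    return "alive" if random.random() < abs(amp_alive) ** 2 else "dead"

outcomes = [measure() for _ in range(10_000)]
print("alive:", outcomes.count("alive"), " dead:", outcomes.count("dead"))
# Any single run yields a definite cat; only the ensemble of many runs
# reveals the probabilities |amplitude|^2.
```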

Interpretations

The Copenhagen interpretation is the oldest and probably still the most widely held interpretation of quantum mechanics.[4][5][6] Most generally it posits something in the act of observation which results in the collapse of the wave function. According to the von Neumann–Wigner interpretation the causative agent in this collapse is consciousness.[7] How this could happen is widely disputed. Hugh Everett's many-worlds interpretation attempts to solve the problem by suggesting there is only one wave function, the superposition of the entire universe, and it never collapses—so there is no measurement problem. Instead, the act of measurement is simply an interaction between quantum entities, e.g. observer, measuring instrument, electron/positron etc., which entangle to form a single larger entity, for instance living cat/happy scientist. Everett also attempted to demonstrate the way that in measurements the probabilistic nature of quantum mechanics would appear; work later extended by Bryce DeWitt.

De Broglie–Bohm theory tries to solve the measurement problem very differently: the information describing the system contains not only the wave function, but also supplementary data (a trajectory) giving the position of the particle(s). The role of the wave function is to generate the velocity field for the particles. These velocities are such that the probability distribution for the particle remains consistent with the predictions of orthodox quantum mechanics. According to de Broglie–Bohm theory, interaction with the environment during a measurement procedure separates the wave packets in configuration space, which is where apparent wave function collapse comes from, even though there is no actual collapse.

The Ghirardi–Rimini–Weber (GRW) theory differs from other collapse theories by proposing that wave function collapse happens spontaneously. Particles have a non-zero probability of undergoing a "hit", or spontaneous collapse of the wave function, on the order of once every hundred million years.[8] Though collapse is extremely rare, the sheer number of particles in a measurement system means that the probability of a collapse occurring somewhere in the system is high. Since the entire measurement system is entangled (via quantum entanglement), the collapse of a single particle initiates the collapse of the entire measurement apparatus.
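The arithmetic behind that claim is easy to check (illustrative numbers: the canonical GRW rate of roughly one hit per particle per 10^16 seconds, and a macroscopic apparatus of about 10^23 particles; both are standard order-of-magnitude choices, not figures from this article):

```python
# GRW order-of-magnitude estimate (illustrative standard values).
hit_rate_per_particle = 1e-16   # hits per particle per second (~once per 3e8 yr)
n_particles = 1e23              # particles in a macroscopic apparatus

system_rate = hit_rate_per_particle * n_particles   # hits/s for the whole system
print(f"expected time to first collapse: {1 / system_rate:.0e} s")
# ~1e-7 s: rare for any one particle, effectively instantaneous for the apparatus.
```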

Erich Joos and Heinz-Dieter Zeh claim that the phenomenon of quantum decoherence, which was put on firm ground in the 1980s, resolves the problem.[9] The idea is that the environment causes the classical appearance of macroscopic objects. Zeh further claims that decoherence makes it possible to identify the fuzzy boundary between the quantum microworld and the world where the classical intuition is applicable.[10][11] Quantum decoherence was proposed in the context of the many-worlds interpretation[citation needed], but it has also become an important part of some modern updates of the Copenhagen interpretation based on consistent histories.[12][13] Quantum decoherence does not describe the actual process of the wave function collapse, but it explains the conversion of the quantum probabilities (that exhibit interference effects) to the ordinary classical probabilities. See, for example, Zurek,[3] Zeh[10] and Schlosshauer.[14]

The present situation is slowly clarifying, as described in a recent paper by Schlosshauer as follows:[15]
Several decoherence-unrelated proposals have been put forward in the past to elucidate the meaning of probabilities and arrive at the Born rule ... It is fair to say that no decisive conclusion appears to have been reached as to the success of these derivations. ...
As it is well known, [many papers by Bohr insist upon] the fundamental role of classical concepts. The experimental evidence for superpositions of macroscopically distinct states on increasingly large length scales counters such a dictum. Superpositions appear to be novel and individually existing states, often without any classical counterparts. Only the physical interactions between systems then determine a particular decomposition into classical states from the view of each particular system. Thus classical concepts are to be understood as locally emergent in a relative-state sense and should no longer claim a fundamental role in the physical theory.
A fourth approach is given by objective collapse models. In such models, the Schrödinger equation is modified and acquires nonlinear terms. These nonlinear modifications are of stochastic nature and lead to a behaviour which for microscopic quantum objects, e.g. electrons or atoms, is unmeasurably close to that given by the usual Schrödinger equation. For macroscopic objects, however, the nonlinear modification becomes important and induces the collapse of the wave function. Objective collapse models are effective theories. The stochastic modification is thought to stem from some external non-quantum field, but the nature of this field is unknown. One possible candidate is the gravitational interaction as in the models of Diósi and Penrose. The main difference of objective collapse models compared to the other approaches is that they make falsifiable predictions that differ from standard quantum mechanics. Experiments are already getting close to the parameter regime where these predictions can be tested.[16]

An interesting solution to the measurement problem is also provided by the hidden-measurements interpretation of quantum mechanics. The hypothesis at the basis of this approach is that in a typical quantum measurement there is a condition of lack of knowledge about which interaction between the measured entity and the measuring apparatus is actualized at each run of the experiment. One can then show that the Born rule can be derived by considering a uniform average over all these possible measurement-interactions.

The $2.5 trillion reason we can’t rely on batteries to clean up the grid

Fluctuating solar and wind power require lots of energy storage, and lithium-ion batteries seem like the obvious choice—but they are far too expensive to play a major role.

A pair of 500-foot smokestacks rise from a natural-gas power plant on the harbor of Moss Landing, California, casting an industrial pall over the pretty seaside town.
If state regulators sign off, however, it could be the site of the world’s largest lithium-ion battery project by late 2020, helping to balance fluctuating wind and solar energy on the California grid.

The 300-megawatt facility is one of four giant lithium-ion storage projects that Pacific Gas and Electric, California’s largest utility, asked the California Public Utilities Commission to approve in late June. Collectively, they would add enough storage capacity to the grid to supply about 2,700 homes for a month (or to store about 0.0009 percent of the electricity the state uses each year).

The California projects are among a growing number of efforts around the world, including Tesla’s 100-megawatt battery array in South Australia, to build ever larger lithium-ion storage systems as prices decline and renewable generation increases. They’re fueling growing optimism that these giant batteries will allow wind and solar power to displace a growing share of fossil-fuel plants.

But there’s a problem with this rosy scenario. These batteries are far too expensive and don’t last nearly long enough, limiting the role they can play on the grid, experts say. If we plan to rely on them for massive amounts of storage as more renewables come online—rather than turning to a broader mix of low-carbon sources like nuclear and natural gas with carbon capture technology—we could be headed down a dangerously unaffordable path.

Small doses

Today’s battery storage technology works best in a limited role, as a substitute for “peaking” power plants, according to a 2016 analysis by researchers at MIT and Argonne National Lab. These are smaller facilities, frequently fueled by natural gas today, that can afford to operate infrequently, firing up quickly when prices and demand are high.

Lithium-ion batteries could compete economically with these natural-gas peakers within the next five years, says Marco Ferrara, a cofounder of Form Energy, an MIT spinout developing grid storage batteries.

“The gas peaker business is pretty close to ending, and lithium-ion is a great replacement,” he says.

This peaker role is precisely the one that most of the new and forthcoming lithium-ion battery projects are designed to fill. Indeed, the California storage projects could eventually replace three natural-gas facilities in the region, two of which are peaker plants.

But much beyond this role, batteries run into real problems. The authors of the 2016 study found steeply diminishing returns when a lot of battery storage is added to the grid. They concluded that coupling battery storage with renewable plants is a “weak substitute” for large, flexible coal or natural-gas combined-cycle plants, the type that can be tapped at any time, run continuously, and vary output levels to meet shifting demand throughout the day.

Not only is lithium-ion technology too expensive for this role, but limited battery life means it’s not well suited to filling gaps during the days, weeks, and even months when wind and solar generation flags.

This problem is particularly acute in California, where both wind and solar fall off precipitously during the fall and winter months. Here’s what the seasonal pattern looks like:
If renewables provided 80 percent of California electricity – half wind, half solar – generation would fall precipitously beginning in the late summer.
Clean Air Task Force analysis of CAISO data
This leads to a critical problem: when renewables reach high levels on the grid, you need far, far more wind and solar plants to crank out enough excess power during peak times to keep the grid operating through those long seasonal dips, says Jesse Jenkins, a coauthor of the study and an energy systems researcher. That, in turn, requires banks upon banks of batteries that can store it all away until it’s needed.

And that ends up being astronomically expensive.
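A minimal toy model makes the mechanism concrete. The monthly profile below is invented for illustration (it is not CAISO data), but it captures the shape of the problem: demand is flat while renewable output peaks in summer, so surplus energy has to be banked for months at a time.

    # Toy seasonal-storage model: flat demand of 100 units per month against
    # an invented renewable profile that over-produces in summer and sags in winter.
    generation = [80, 90, 110, 125, 135, 140, 130, 120, 100, 80, 70, 70]
    demand = 100

    # Running energy balance: surpluses are banked, deficits are drawn back out.
    balance, trajectory = 0.0, []
    for g in generation:
        balance += g - demand
        trajectory.append(balance)

    # Storage must span the swing between the fullest and emptiest points.
    needed = max(trajectory) - min(trajectory)
    print(f"storage needed: {needed:.0f} units ({needed / demand:.1f} months of demand)")

Even though this toy profile slightly over-generates across the year as a whole, about 1.6 months’ worth of demand has to sit in storage to ride through the seasonal dip.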

California dreaming

These are issues California can’t afford to ignore for long. The state is already on track to get 50 percent of its electricity from clean sources by 2020, and the legislature is once again considering a bill that would require it to reach 100 percent by 2045. To complicate things, regulators voted in January to close the state’s last nuclear plant, a carbon-free source that provides 24 percent of PG&E’s energy. That will leave California heavily reliant on renewable sources to meet its goals.

The Clean Air Task Force, a Boston-based energy policy think tank, recently found that reaching the 80 percent mark for renewables in California would mean massive amounts of surplus generation during the summer months, requiring 9.6 million megawatt-hours of energy storage. Achieving 100 percent would require 36.3 million.

The state currently has 150,000 megawatt-hours of energy storage in total. (That’s mainly pumped hydroelectric storage, with a small share of batteries.)
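Simple division against the Clean Air Task Force estimates shows the size of the gap, using the numbers as reported:

    existing_mwh = 150_000                         # California storage today
    needed_80, needed_100 = 9_600_000, 36_300_000  # CATF estimates for 80% and 100%

    print(f"80% renewables:  {needed_80 / existing_mwh:.0f}x today's storage")   # 64x
    print(f"100% renewables: {needed_100 / existing_mwh:.0f}x today's storage")  # 242x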
[Chart: If renewables supplied 80 percent of California electricity, more than eight million megawatt-hours of surplus energy would be generated during summer peaks. Source: Clean Air Task Force analysis of CAISO data.]
Building the level of renewable generation and storage necessary to reach the state’s goals would drive up costs exponentially, from $49 per megawatt-hour of generation at 50 percent to $1,612 at 100 percent.

And that's assuming lithium-ion batteries will cost roughly a third of what they do now.
[Chart: California’s power system costs rise exponentially if renewables generate the bulk of electricity. Source: Clean Air Task Force analysis of CAISO data.]
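Taken at face value, that is a roughly 33-fold jump in the cost of generation over the last 50 percentage points:

    cost_50, cost_100 = 49, 1_612   # $/MWh of generation, as reported
    print(f"cost multiplier from 50% to 100% renewables: {cost_100 / cost_50:.0f}x")  # ~33x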
“The system becomes completely dominated by the cost of storage,” says Steve Brick, a senior advisor for the Clean Air Task Force. “You build this enormous storage machine that you fill up by midyear and then just dissipate it. It’s a massive capital investment that gets utilized very little.”

These forces would dramatically increase electricity costs for consumers.

“You have to pause and ask yourself: ‘Is there any way the public would stand for that?’” Brick says.

Similarly, a study earlier this year in Energy & Environmental Science found that meeting 80 percent of US electricity demand with wind and solar would require either a nationwide high-speed transmission system, which can balance renewable generation over hundreds of miles, or 12 hours of electricity storage for the whole system (see “Relying on renewables alone significantly inflates the cost of overhauling energy”).

At current prices, a battery storage system of that size would cost more than $2.5 trillion.
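That order of magnitude is easy to reproduce. The annual-demand and installed-cost figures below are rough 2018-era assumptions for illustration, not inputs taken from the study:

    # Back-of-envelope: 12 hours of storage for the entire US grid.
    us_annual_twh = 4_000                          # assumed annual US electricity demand
    avg_power_gw = us_annual_twh * 1_000 / 8_760   # ~457 GW average draw

    storage_twh = avg_power_gw * 12 / 1_000        # 12 hours of average demand, ~5.5 TWh
    cost_per_kwh = 450                             # assumed installed system cost, $/kWh
    total_cost = storage_twh * 1e9 * cost_per_kwh  # 1 TWh = 1e9 kWh

    print(f"storage size: {storage_twh:.1f} TWh")
    print(f"price tag: ${total_cost / 1e12:.1f} trillion")   # ~$2.5 trillion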

A scary price tag

Of course, cheaper and better grid storage is possible, and researchers and startups are exploring various possibilities. Form Energy, which recently secured funding from Bill Gates’s Breakthrough Energy Ventures, is trying to develop aqueous sulfur flow batteries with far longer duration, at a fifth of the cost that lithium-ion batteries are projected to reach.

Ferrara’s modeling has found that such a battery could make it possible for renewables to provide 90 percent of electricity needs for most grids, for just marginally higher costs than today’s.

But it’s dangerous to bank on those kinds of battery breakthroughs—and even if Form Energy or some other company does pull it off, costs would still rise exponentially beyond the 90 percent threshold, Ferrara says.

“The risk,” Jenkins says, “is we drive up the cost of deep decarbonization in the power sector to the point where the public decides it’s simply unaffordable to continue toward zero carbon.”

Hard problem of consciousness

From Wikipedia, the free encyclopedia

The hard problem of consciousness is the problem of explaining how and why we have qualia or phenomenal experiences—how sensations acquire characteristics, such as colors and tastes. The philosopher David Chalmers, who introduced the term "hard problem" of consciousness, contrasts this with the "easy problems" of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomena. Chalmers claims that the problem of experience is distinct from this set, and he argues that the problem of experience will "persist even when the performance of all the relevant functions is explained".

The existence of a "hard problem" is controversial and has been disputed by philosophers such as Daniel Dennett[4] and cognitive neuroscientists such as Stanislas Dehaene.[5] Clinical neurologist and skeptic Steven Novella refers to it as "the hard non-problem".[6]

Formulation of the problem

Chalmers' formulation

In Facing Up to the Problem of Consciousness (1995), Chalmers wrote:[3]
It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.
In the same paper, he also wrote:
The really hard problem of consciousness is the problem of experience. When we think and perceive there is a whir of information processing, but there is also a subjective aspect.
The philosopher Raamy Majeed noted in 2016 that the hard problem is, in fact, associated with two "explanatory targets":[7]
  1. [PQ] Physical processing gives rise to experiences with a phenomenal character.
  2. [Q] Our phenomenal qualities are thus-and-so.
The first fact concerns the relationship between the physical and the phenomenal, whereas the second concerns the very nature of the phenomenal itself. Most responses to the hard problem are aimed at explaining either one of these facts or both.

Easy problems

Chalmers contrasts the hard problem with a number of (relatively) easy problems that consciousness presents. He emphasizes that what the easy problems have in common is that they all represent some ability, or the performance of some function or behavior. Examples of easy problems include:[8]
  • the ability to discriminate, categorize, and react to environmental stimuli;
  • the integration of information by a cognitive system;
  • the reportability of mental states;
  • the ability of a system to access its own internal states;
  • the focus of attention;
  • the deliberate control of behavior;
  • the difference between wakefulness and sleep.

Other formulations

Other formulations of the "hard problem" include:
  • "How is it that some organisms are subjects of experience?"
  • "Why does awareness of sensory information exist at all?"
  • "Why do qualia exist?"
  • "Why is there a subjective component to experience?"
  • "Why aren't we philosophical zombies?"

Historical predecessors

The hard problem has scholarly antecedents considerably earlier than Chalmers, as Chalmers himself has pointed out.[9]

The physicist and mathematician Isaac Newton wrote in a 1672 letter to Henry Oldenburg:
to determine by what modes or actions light produceth in our minds the phantasm of colour is not so easie.[10]
In An Essay Concerning Human Understanding (1690), the philosopher and physician John Locke argued:
Divide matter into as minute parts as you will (which we are apt to imagine a sort of spiritualizing or making a thinking thing of it) vary the figure and motion of it as much as you please—a globe, cube, cone, prism, cylinder, etc., whose diameters are but 1,000,000th part of a gry, will operate not otherwise upon other bodies of proportionable bulk than those of an inch or foot diameter—and you may as rationally expect to produce sense, thought, and knowledge, by putting together, in a certain figure and motion, gross particles of matter, as by those that are the very minutest that do anywhere exist. They knock, impel, and resist one another, just as the greater do; and that is all they can do... [I]t is impossible to conceive that matter, either with or without motion, could have originally in and from itself sense, perception, and knowledge; as is evident from hence that then sense, perception, and knowledge must be a property eternally inseparable from matter and every particle of it.[11]
The polymath and philosopher Gottfried Leibniz wrote in 1714, in a passage now known as Leibniz's gap:
Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.[12]
The philosopher and political economist J.S. Mill wrote in A System of Logic (1843), Book V, Chapter V, section 3:
Now I am far from pretending that it may not be capable of proof, or that it is not an important addition to our knowledge if proved, that certain motions in the particles of bodies are the conditions of the production of heat or light; that certain assignable physical modifications of the nerves may be the conditions not only of our sensations or emotions, but even of our thoughts; that certain mechanical and chemical conditions may, in the order of nature, be sufficient to determine to action the physiological laws of life. All I insist upon, in common with every thinker who entertains any clear idea of the logic of science, is, that it shall not be supposed that by proving these things one step would be made towards a real explanation of heat, light, or sensation; or that the generic peculiarity of those phenomena can be in the least degree evaded by any such discoveries, however well established. Let it be shown, for instance, that the most complex series of physical causes and effects succeed one another in the eye and in the brain to produce a sensation of colour; rays falling on the eye, refracted, converging, crossing one another, making an inverted image on the retina, and after this a motion—let it be a vibration, or a rush of nervous fluid, or whatever else you are pleased to suppose, along the optic nerve—a propagation of this motion to the brain itself, and as many more different motions as you choose; still, at the end of these motions, there is something which is not motion, there is a feeling or sensation of colour. Whatever number of motions we may be able to interpolate, and whether they be real or imaginary, we shall still find, at the end of the series, a motion antecedent and a colour consequent. The mode in which any one of the motions produces the next, may possibly be susceptible of explanation by some general law of motion: but the mode in which the last motion produces the sensation of colour, cannot be explained by any law of motion; it is the law of colour: which is, and must always remain, a peculiar thing. Where our consciousness recognises between two phenomena an inherent distinction; where we are sensible of a difference which is not merely of degree, and feel that no adding one of the phenomena to itself would produce the other; any theory which attempts to bring either under the laws of the other must be false; though a theory which merely treats the one as a cause or condition of the other, may possibly be true.
The biologist T.H. Huxley wrote in 1868:
But what consciousness is, we know not; and how it is that anything so remarkable as a state of consciousness comes about as the result of irritating nervous tissue, is just as unaccountable as the appearance of the Djin when Aladdin rubbed his lamp in the story, or as any other ultimate fact of nature.[13]
The philosopher Thomas Nagel argued in 1974:
If physicalism is to be defended, the phenomenological features must themselves be given a physical account. But when we examine their subjective character it seems that such a result is impossible. The reason is that every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view.[14]

Relationship to scientific frameworks

Neural correlates of consciousness

Since 1990, researchers including the molecular biologist Francis Crick and the neuroscientist Christof Koch have made significant progress toward identifying which neurobiological events occur concurrently with the experience of subjective consciousness.[15] These postulated events are referred to as neural correlates of consciousness or NCCs. However, this research arguably addresses the question of which neurobiological mechanisms are linked to consciousness but not the question of why they should give rise to consciousness at all, the latter being the hard problem of consciousness as Chalmers formulated it. In "On the Search for the Neural Correlate of Consciousness", Chalmers said he is confident that, granting the principle that something such as what he terms global availability can be used as an indicator of consciousness, the neural correlates will be discovered "in a century or two".[16] Nevertheless, he stated regarding their relationship to the hard problem of consciousness:
One can always ask why these processes of availability should give rise to consciousness in the first place. As yet we cannot explain why they do so, and it may well be that full details about the processes of availability will still fail to answer this question. Certainly, nothing in the standard methodology I have outlined answers the question; that methodology assumes a relation between availability and consciousness, and therefore does nothing to explain it. [...] So the hard problem remains. But who knows: Somewhere along the line we may be led to the relevant insights that show why the link is there, and the hard problem may then be solved.[16]
The neuroscientist and Nobel laureate Eric Kandel wrote that locating the NCCs would not solve the hard problem, but rather one of the so-called easy problems to which the hard problem is contrasted.[17] Kandel went on to note Crick and Koch's suggestion that once the binding problem—understanding what accounts for the unity of experience—is solved, it will be possible to solve the hard problem empirically.[17] However, neuroscientist Anil Seth argued that emphasis on the so-called hard problem is a distraction from what he calls the "real problem": understanding the neurobiology underlying consciousness, namely the neural correlates of various conscious processes.[18] This more modest goal is the focus of most scientists working on consciousness.[17] Psychologist Susan Blackmore believes, by contrast, that the search for the neural correlates of consciousness is futile and itself predicated on an erroneous belief in the hard problem of consciousness.[19]

Integrated information theory

Integrated information theory (IIT), developed by the neuroscientist and psychiatrist Giulio Tononi in 2004 and more recently also advocated by Koch, is one of the most discussed models of consciousness in neuroscience and elsewhere.[20][21] The theory proposes an identity between consciousness and integrated information, with the latter item (denoted as Φ) defined mathematically and thus in principle measurable.[21][22] The hard problem of consciousness, write Tononi and Koch, may indeed be intractable when working from matter to consciousness.[23] However, because IIT inverts this relationship and works from phenomenological axioms to matter, they say it could be able to solve the hard problem.[23] In this vein, proponents have said the theory goes beyond identifying human neural correlates and can be extrapolated to all physical systems. Tononi wrote (along with two colleagues):
While identifying the “neural correlates of consciousness” is undoubtedly important, it is hard to see how it could ever lead to a satisfactory explanation of what consciousness is and how it comes about. As will be illustrated below, IIT offers a way to analyze systems of mechanisms to determine if they are properly structured to give rise to consciousness, how much of it, and of which kind.[24]
As part of a broader critique of IIT, Michael Cerullo suggested that the theory's proposed explanation is in fact for what he dubs (following Scott Aaronson) the "Pretty Hard Problem" of methodically inferring which physical systems are conscious—but would not solve Chalmers' hard problem.[21] "Even if IIT is correct," he argues, "it does not explain why integrated information generates (or is) consciousness."[21]

Responses

Consciousness is fundamental or elusive

Some philosophers, including David Chalmers in the late 20th century and Alfred North Whitehead earlier in the 1900s, have argued that conscious experience is a fundamental constituent of the universe, a form of panpsychism sometimes referred to as panexperientialism. Chalmers argued that a "rich inner life" is not logically reducible to the functional properties of physical processes, so consciousness must be described in nonphysical terms: a fundamental ingredient capable of accounting for phenomena that physical explanation has not captured. Positing this fundamental property, Chalmers argues, is necessary to explain certain features of the world, just as other fundamental features, such as mass and time, are invoked to explain significant principles in nature.

The philosopher Thomas Nagel posited in 1974 that experiences are essentially subjective (accessible only to the individual undergoing them), while physical states are essentially objective (accessible to multiple individuals). So at this stage, he argued, we have no idea what it could even mean to claim that an essentially subjective state just is an essentially non-subjective state. In other words, we have no idea of what reductivism really amounts to.[14]

New mysterianism, such as that of the philosopher Colin McGinn, proposes that the human mind, in its current form, will not be able to explain consciousness.[25]

Deflationary accounts

Some philosophers, such as Daniel Dennett[4] and Peter Hacker[26] oppose the idea that there is a hard problem. These theorists have argued that once we really come to understand what consciousness is, we will realize that the hard problem is unreal. For instance, Dennett asserts that the so-called hard problem will be solved in the process of answering the "easy" ones (which, as he has clarified, he does not consider "easy" at all).[4] In contrast with Chalmers, he argues that consciousness is not a fundamental feature of the universe and instead will eventually be fully explained by natural phenomena. Instead of involving the nonphysical, he says, consciousness merely plays tricks on people so that it appears nonphysical—in other words, it simply seems like it requires nonphysical features to account for its powers. In this way, Dennett compares consciousness to stage magic and its capability to create extraordinary illusions out of ordinary things.[27]

To show how people might commonly be fooled into overstating the powers of consciousness, Dennett describes a normal phenomenon called change blindness, the failure to detect changes in a scene across a series of alternating images.[28] He argues that because people routinely overestimate how much their brains are visually processing, consciousness is likely not as pervasive as we take it to be, and that making consciousness more mysterious than it is could be a misstep on the way to an effective explanatory theory. Critics such as Galen Strawson reply that, in the case of consciousness, even a mistaken experience retains the essential character of experience that needs to be explained, contra Dennett.

To address the question of the hard problem, or how and why physical processes give rise to experience, Dennett states that the phenomenon of having experience is nothing more than the performance of functions or the production of behavior, which can also be referred to as the easy problems of consciousness.[4] He states that consciousness itself is driven simply by these functions, and to strip them away would wipe out any ability to identify thoughts, feelings, and consciousness altogether. So, unlike Chalmers and other dualists, Dennett says that the easy problems and the hard problem cannot be separated from each other. To him, the hard problem of experience is included among—not separate from—the easy problems, and therefore they can only be explained together as a cohesive unit.[27]

Like Dennett, Hacker argues that the hard problem is fundamentally incoherent and that "consciousness studies", as it exists today, is "literally a total waste of time":[26]
The whole endeavour of the consciousness studies community is absurd—they are in pursuit of a chimera. They misunderstand the nature of consciousness. The conception of consciousness which they have is incoherent. The questions they are asking don't make sense. They have to go back to the drawing board and start all over again.
Critics of Dennett's approach, such as Chalmers and Nagel, argue that Dennett's argument misses the point of the inquiry by merely re-defining consciousness as an external property and ignoring the subjective aspect completely. This has led detractors to refer to Dennett's book Consciousness Explained as Consciousness Ignored or Consciousness Explained Away.[4] Dennett discussed this at the end of his book with a section entitled Consciousness Explained or Explained Away?[28]

The most common arguments against deflationary accounts and eliminative materialism are the argument from qualia and the claim that conscious experiences are irreducible to physical states (or that current popular definitions of "physical" are incomplete). A deflationary reply is that one and the same reality can appear in different ways, and that the numerical difference of these appearances is consistent with a unitary mode of existence of that reality.[citation needed] Critics of the deflationary approach object that qualia are a case in which a single reality cannot have multiple appearances. For example, the philosopher John Searle pointed out: "where consciousness is concerned, the existence of the appearance is the reality".[29]

Notable deflationary accounts include the higher-order theories of consciousness.[30] In 2005, the philosopher Peter Carruthers wrote about "recognitional concepts of experience", that is, "a capacity to recognize [a] type of experience when it occurs in one's own mental life", and suggested that such a capacity does not depend upon qualia.[31]

The philosophers Glenn Carruthers and Elizabeth Schier said in 2012 that the main arguments for the existence of a hard problem—philosophical zombies, Mary's room, and Nagel's bats—are only persuasive if one already assumes that "consciousness must be independent of the structure and function of mental states, i.e. that there is a hard problem". Hence, the arguments beg the question. The authors suggest that "instead of letting our conclusions on the thought experiments guide our theories of consciousness, we should let our theories of consciousness guide our conclusions from the thought experiments".[32]

In 2013, the philosopher Elizabeth Irvine argued that neither science nor folk psychology treats mental states as having phenomenal properties, and therefore "the hard problem of consciousness may not be a genuine problem for non-philosophers (despite its overwhelming obviousness to philosophers), and questions about consciousness may well 'shatter' into more specific questions about particular capacities".[33]

The philosopher Massimo Pigliucci distances himself from eliminativism, but he said in 2013 that the hard problem is still misguided, resulting from a "category mistake":[34]
Of course an explanation isn't the same as an experience, but that's because the two are completely independent categories, like colors and triangles. It is obvious that I cannot experience what it is like to be you, but I can potentially have a complete explanation of how and why it is possible to be you.

The source of illusion

A complete reductionistic or mechanistic theory of consciousness must include the description of a mechanism by which the subjective aspect of consciousness is perceived and reported by people. Philosophers such as Chalmers or Nagel have rejected reductionist theories of consciousness because they believe that the reports of subjective experience constitute a vast and important body of empirical evidence which is ignored by modern reductionist theories of consciousness.[9]

Dennett argued that solving the easy problem of consciousness, that is, finding out how the brain works, will eventually lead to the solution of the hard problem of consciousness.[4] In particular, the solution can be achieved by identifying the stimuli and neurological pathways whose operation generates evidence of subjective experience.

Neuroscientist Michael Graziano, in his book Consciousness and the Social Brain, advocates what he calls attention schema theory, in which our perception of being conscious is merely an error in perception, held by brains which evolved to hold erroneous and incomplete models of their own internal workings, just as they hold erroneous and incomplete models of their own bodies and of the external world.[35][36]

Cognitive neuroscientist Stanislas Dehaene, in his 2014 book Consciousness and the Brain, summarized the previous decades of experimental consciousness research involving reports of subjective experience, and argued that Chalmers' "easy problems" of consciousness are actually the hard problems and the "hard problems" are based only upon ill-defined intuitions that, according to Dehaene, are continually shifting as understanding evolves:[5]
Once our intuitions are educated by cognitive neuroscience and computer simulations, Chalmers' hard problem will evaporate. The hypothetical concept of qualia, pure mental experience, detached from any information-processing role, will be viewed as a peculiar idea of the prescientific era, much like vitalism... [Just as science dispatched vitalism] the science of consciousness will keep eating away at the hard problem of consciousness until it vanishes.

Imagination Age

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Imaginati...