Thursday, May 25, 2023

Photoelectric effect

From Wikipedia, the free encyclopedia
The emission of electrons from a metal plate caused by light quanta – photons.

The photoelectric effect is the emission of electrons when electromagnetic radiation, such as light, hits a material. Electrons emitted in this manner are called photoelectrons. The phenomenon is studied in condensed matter physics, and solid state and quantum chemistry to draw inferences about the properties of atoms, molecules and solids. The effect has found use in electronic devices specialized for light detection and precisely timed electron emission.

The experimental results disagree with classical electromagnetism, which predicts that continuous light waves transfer energy to electrons, which would then be emitted when they accumulate enough energy. An alteration in the intensity of light would theoretically change the kinetic energy of the emitted electrons, with sufficiently dim light resulting in a delayed emission. The experimental results instead show that electrons are dislodged only when the light exceeds a certain frequency—regardless of the light's intensity or duration of exposure. Because a low-frequency beam at a high intensity does not build up the energy required to produce photoelectrons, as would be the case if light's energy accumulated over time from a continuous wave, Albert Einstein proposed that a beam of light is not a wave propagating through space, but a swarm of discrete energy packets, known as photons.

Emission of conduction electrons from typical metals requires a few electron-volt (eV) light quanta, corresponding to short-wavelength visible or ultraviolet light. In extreme cases, emissions are induced with photons approaching zero energy, like in systems with negative electron affinity and the emission from excited states, or a few hundred keV photons for core electrons in elements with a high atomic number.[1] Study of the photoelectric effect led to important steps in understanding the quantum nature of light and electrons and influenced the formation of the concept of wave–particle duality. Other phenomena where light affects the movement of electric charges include the photoconductive effect, the photovoltaic effect, and the photoelectrochemical effect.

Emission mechanism

The photons of a light beam have a characteristic energy, called photon energy, which is proportional to the frequency of the light. In the photoemission process, when an electron within some material absorbs the energy of a photon and acquires more energy than its binding energy, it is likely to be ejected. If the photon energy is too low, the electron is unable to escape the material. Since an increase in the intensity of low-frequency light will only increase the number of low-energy photons, this change in intensity will not create any single photon with enough energy to dislodge an electron. Moreover, the energy of the emitted electrons will not depend on the intensity of the incoming light of a given frequency, but only on the energy of the individual photons.
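To make the proportionality concrete, here is a minimal Python sketch (an illustration added here, not part of the original article) that converts a light frequency into a per-photon energy; the constant is the standard value of the Planck constant expressed in eV·s.

    # Photon energy E = h*f, expressed in electron-volts.
    PLANCK_EV_S = 4.135667696e-15  # Planck constant in eV*s

    def photon_energy_ev(frequency_hz: float) -> float:
        """Energy carried by a single photon of the given frequency, in eV."""
        return PLANCK_EV_S * frequency_hz

    # Green light near 540 THz carries about 2.2 eV per photon; doubling the
    # intensity doubles the number of photons, not the energy of each one.
    print(photon_energy_ev(5.4e14))  # ~2.23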

While free electrons can absorb any energy when irradiated as long as this is followed by an immediate re-emission, like in the Compton effect, in quantum systems all of the energy from one photon is absorbed—if the process is allowed by quantum mechanics—or none at all. Part of the acquired energy is used to liberate the electron from its atomic binding, and the rest contributes to the electron's kinetic energy as a free particle. Because electrons in a material occupy many different quantum states with different binding energies, and because they can sustain energy losses on their way out of the material, the emitted electrons will have a range of kinetic energies. The electrons from the highest occupied states will have the highest kinetic energy. In metals, those electrons will be emitted from the Fermi level.

When the photoelectron is emitted into a solid rather than into a vacuum, the term internal photoemission is often used, and emission into a vacuum is distinguished as external photoemission.

Experimental observation of photoelectric emission

Even though photoemission can occur from any material, it is most readily observed from metals and other conductors. This is because the process produces a charge imbalance which, if not neutralized by current flow, results in an increasing potential barrier until the emission completely ceases. The energy barrier to photoemission is usually increased by nonconductive oxide layers on metal surfaces, so most practical experiments and devices based on the photoelectric effect use clean metal surfaces in evacuated tubes. A vacuum also helps in observing the electrons, since it prevents gases from impeding their flow between the electrodes.

Because atmospheric absorption leaves sunlight with little ultraviolet content, light rich in ultraviolet rays used to be obtained by burning magnesium or from an arc lamp. At present, mercury-vapor lamps, noble-gas discharge UV lamps and radio-frequency plasma sources, ultraviolet lasers, and synchrotron insertion device light sources prevail.

Schematic of the experiment to demonstrate the photoelectric effect. Filtered, monochromatic light of a certain wavelength strikes the emitting electrode (E) inside a vacuum tube. The collector electrode (C) is biased to a voltage VC that can be set to attract the emitted electrons, when positive, or prevent any of them from reaching the collector when negative.

The classical setup to observe the photoelectric effect includes a light source, a set of filters to monochromatize the light, a vacuum tube transparent to ultraviolet light, an emitting electrode (E) exposed to the light, and a collector (C) whose voltage VC can be externally controlled.

A positive external voltage is used to direct the photoemitted electrons onto the collector. If the frequency and the intensity of the incident radiation are fixed, the photoelectric current I increases with an increase in the positive voltage, as more and more electrons are directed onto the electrode. When no additional photoelectrons can be collected, the photoelectric current attains a saturation value. This saturation current can only be increased by increasing the intensity of the light.

An increasing negative voltage prevents all but the highest-energy electrons from reaching the collector. When no current is observed through the tube, the negative voltage has reached a value high enough to slow down and stop the most energetic photoelectrons of kinetic energy Kmax. This value of the retarding voltage is called the stopping potential or cut-off potential V0. Since the work done by the retarding potential in stopping an electron of charge e is eV0, the following must hold: eV0 = Kmax.
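As a worked illustration of eV0 = Kmax (a sketch added here, not part of the original text), the following Python snippet converts a measured stopping potential into the maximum kinetic energy in joules and the corresponding non-relativistic electron speed; the 0.8 V reading is an invented example value.

    import math

    E_CHARGE = 1.602176634e-19     # elementary charge, C
    M_ELECTRON = 9.1093837015e-31  # electron rest mass, kg

    def k_max_joules(v0_volts: float) -> float:
        """Kmax = e*V0: work done by the retarding potential on one electron."""
        return E_CHARGE * v0_volts

    def max_speed(v0_volts: float) -> float:
        """Non-relativistic speed of the fastest photoelectrons, in m/s."""
        return math.sqrt(2.0 * k_max_joules(v0_volts) / M_ELECTRON)

    # A stopping potential of 0.8 V corresponds to Kmax = 0.8 eV (~1.28e-19 J)
    # and a maximum electron speed of roughly 5.3e5 m/s.
    print(k_max_joules(0.8), max_speed(0.8))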

The current-voltage curve is sigmoidal, but its exact shape depends on the experimental geometry and the electrode material properties.

For a given metal surface, there exists a certain minimum frequency of incident radiation below which no photoelectrons are emitted. This frequency is called the threshold frequency. Increasing the frequency of the incident beam increases the maximum kinetic energy of the emitted photoelectrons, and the stopping voltage has to increase. The number of emitted electrons may also change because the probability that each photon results in an emitted electron is a function of photon energy.

An increase in the intensity of the same monochromatic light (so long as the intensity is not too high[12]), which is proportional to the number of photons impinging on the surface in a given time, increases the rate at which electrons are ejected—the photoelectric current I—but the kinetic energy of the photoelectrons and the stopping voltage remain the same. For a given metal and frequency of incident radiation, the rate at which photoelectrons are ejected is directly proportional to the intensity of the incident light.

The time lag between the incidence of radiation and the emission of a photoelectron is very small, less than 10⁻⁹ second. The angular distribution of the photoelectrons is highly dependent on the polarization (the direction of the electric field) of the incident light, as well as the emitting material's quantum properties such as atomic and molecular orbital symmetries and the electronic band structure of crystalline solids. In materials without macroscopic order, the distribution of electrons tends to peak in the direction of polarization of linearly polarized light. The experimental technique that can measure these distributions to infer the material's properties is angle-resolved photoemission spectroscopy.

Theoretical explanation

Diagram of the maximum kinetic energy as a function of the frequency of light on zinc.

In 1905, Einstein proposed a theory of the photoelectric effect using a concept that light consists of tiny packets of energy known as photons or light quanta. Each packet carries energy hν that is proportional to the frequency ν of the corresponding electromagnetic wave. The proportionality constant h has become known as the Planck constant. In the range of kinetic energies of the electrons that are removed from their varying atomic bindings by the absorption of a photon of energy hν, the highest kinetic energy is

Kmax = hν − W.

Here, W is the minimum energy required to remove an electron from the surface of the material. It is called the work function of the surface and is sometimes denoted W or Φ. If the work function is written as

W = hν0,

the formula for the maximum kinetic energy of the ejected electrons becomes

Kmax = h(ν − ν0).

Kinetic energy is positive, and ν > ν0 is required for the photoelectric effect to occur. The frequency ν0 is the threshold frequency for the given material. Above that frequency, the maximum kinetic energy of the photoelectrons, as well as the stopping voltage in the experiment, rises linearly with the frequency and has no dependence on the number of photons or the intensity of the impinging monochromatic light. Einstein's formula, however simple, explained all the phenomenology of the photoelectric effect, and had far-reaching consequences in the development of quantum mechanics.
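A small numerical sketch of Einstein's relation follows (added for illustration; the zinc work function of about 4.3 eV is an assumed textbook value, since measured values vary with surface preparation):

    PLANCK_EV_S = 4.135667696e-15  # Planck constant, eV*s
    C_M_PER_S = 2.99792458e8       # speed of light, m/s

    def k_max_ev(wavelength_m: float, work_function_ev: float) -> float:
        """Einstein's relation Kmax = h*nu - W; a negative result means the
        photon is below threshold and no photoelectron is emitted."""
        photon_ev = PLANCK_EV_S * C_M_PER_S / wavelength_m
        return photon_ev - work_function_ev

    W_ZINC_EV = 4.3  # assumed work function of zinc, eV

    # 254 nm mercury UV line: photon energy ~4.88 eV, so Kmax ~ 0.58 eV.
    print(k_max_ev(254e-9, W_ZINC_EV))
    # 650 nm red light (~1.91 eV) is below threshold: the result is negative,
    # so no photoelectrons are emitted regardless of intensity.
    print(k_max_ev(650e-9, W_ZINC_EV))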

Photoemission from atoms, molecules and solids

Electrons that are bound in atoms, molecules and solids each occupy distinct states of well-defined binding energies. When light quanta deliver more than this amount of energy to an individual electron, the electron may be emitted into free space with excess (kinetic) energy equal to the photon energy minus the binding energy. The distribution of kinetic energies thus reflects the distribution of the binding energies of the electrons in the atomic, molecular or crystalline system: an electron emitted from a state at binding energy EB is found at kinetic energy Ek = hν − EB. This distribution is one of the main characteristics of the quantum system, and can be used for further studies in quantum chemistry and quantum physics.

Models of photoemission from solids

The electronic properties of ordered, crystalline solids are determined by the distribution of the electronic states with respect to energy and momentum—the electronic band structure of the solid. Theoretical models of photoemission from solids show that this distribution is, for the most part, preserved in the photoelectric effect. The phenomenological three-step model for ultraviolet and soft X-ray excitation decomposes the effect into these steps:

  1. Inner photoelectric effect in the bulk of the material, that is, a direct optical transition between an occupied and an unoccupied electronic state. This effect is subject to quantum-mechanical selection rules for dipole transitions. The hole left behind by the electron can give rise to secondary electron emission, or the so-called Auger effect, which may be visible even when the primary photoelectron does not leave the material. In molecular solids, phonons are excited in this step and may be visible as satellite lines in the final electron energy.
  2. Electron propagation to the surface, in which some electrons may be scattered because of interactions with other constituents of the solid. Electrons that originate deeper in the solid are much more likely to suffer collisions and emerge with altered energy and momentum. Their mean free path follows a roughly universal curve as a function of the electron's energy.
  3. Electron escape through the surface barrier into free-electron-like states of the vacuum. In this step the electron loses energy in the amount of the work function of the surface, and suffers momentum loss in the direction perpendicular to the surface. Because the binding energy of electrons in solids is conveniently expressed with respect to the highest occupied state at the Fermi energy EF, and the difference to the free-space (vacuum) energy is the work function W of the surface, the kinetic energy of the electrons emitted from solids is usually written as Ek = hν − W − EB.

There are cases where the three-step model fails to explain peculiarities of the photoelectron intensity distributions. The more elaborate one-step model treats the effect as a coherent process of photoexcitation into the final state of a finite crystal for which the wave function is free-electron-like outside of the crystal, but has a decaying envelope inside.

History

19th century

In 1839, Alexandre Edmond Becquerel discovered the photovoltaic effect while studying the effect of light on electrolytic cells. Though not equivalent to the photoelectric effect, his work on photovoltaics was instrumental in showing a strong relationship between light and electronic properties of materials. In 1873, Willoughby Smith discovered photoconductivity in selenium while testing the metal for its high resistance properties in conjunction with his work involving submarine telegraph cables.

Johann Elster (1854–1920) and Hans Geitel (1855–1923), students in Heidelberg, investigated the effects produced by light on electrified bodies and developed the first practical photoelectric cells that could be used to measure the intensity of light. They arranged metals with respect to their power of discharging negative electricity: rubidium, potassium, alloy of potassium and sodium, sodium, lithium, magnesium, thallium and zinc; for copper, platinum, lead, iron, cadmium, carbon, and mercury the effects with ordinary light were too small to be measurable. The order of the metals for this effect was the same as in Volta's series for contact-electricity, the most electropositive metals giving the largest photo-electric effect.

The gold leaf electroscope to demonstrate the photoelectric effect. When the electroscope is negatively charged, there is an excess of electrons and the leaves are separated. If short wavelength, high-frequency light (such as ultraviolet light obtained from an arc lamp, or by burning magnesium, or by using an induction coil between zinc or cadmium terminals to produce sparking) shines on the cap, the electroscope discharges, and the leaves fall limp. If, however, the frequency of the light waves is below the threshold value for the cap, the leaves will not discharge, no matter how long one shines the light at the cap.

In 1887, Heinrich Hertz observed the photoelectric effect and reported on the production and reception of electromagnetic waves. The receiver in his apparatus consisted of a coil with a spark gap, where a spark would be seen upon detection of electromagnetic waves. He placed the apparatus in a darkened box to see the spark better. However, he noticed that the maximum spark length was reduced when inside the box. A glass panel placed between the source of electromagnetic waves and the receiver absorbed ultraviolet radiation that assisted the electrons in jumping across the gap. When removed, the spark length would increase. He observed no decrease in spark length when he replaced the glass with quartz, as quartz does not absorb UV radiation.

The discoveries by Hertz led to a series of investigations by Hallwachs, Hoor, Righi and Stoletov on the effect of light, and especially of ultraviolet light, on charged bodies. Hallwachs connected a zinc plate to an electroscope. He allowed ultraviolet light to fall on a freshly cleaned zinc plate and observed that the zinc plate became uncharged if initially negatively charged, positively charged if initially uncharged, and more positively charged if initially positively charged. From these observations he concluded that some negatively charged particles were emitted by the zinc plate when exposed to ultraviolet light.

With regard to the Hertz effect, researchers from the start noted the complexity of the phenomenon of photoelectric fatigue—the progressive diminution of the effect observed upon fresh metallic surfaces. According to Hallwachs, ozone played an important part in the phenomenon, and the emission was influenced by oxidation, humidity, and the degree of polishing of the surface. It was at the time unclear whether fatigue was absent in a vacuum.

In the period from 1888 until 1891, a detailed analysis of the photoeffect was performed by Aleksandr Stoletov with results reported in six publications. Stoletov invented a new experimental setup which was more suitable for a quantitative analysis of the photoeffect. He discovered a direct proportionality between the intensity of light and the induced photoelectric current (the first law of photoeffect or Stoletov's law). He measured the dependence of the intensity of the photoelectric current on the gas pressure, where he found the existence of an optimal gas pressure corresponding to a maximum photocurrent; this property was used for the creation of solar cells.

Many substances besides metals discharge negative electricity under the action of ultraviolet light. G. C. Schmidt and O. Knoblauch compiled a list of these substances.

In 1897, J. J. Thomson investigated ultraviolet light in Crookes tubes. Thomson deduced that the ejected particles, which he called corpuscles, were of the same nature as cathode rays. These particles later became known as electrons. Thomson enclosed a metal plate (a cathode) in a vacuum tube, and exposed it to high-frequency radiation. It was thought that the oscillating electromagnetic fields caused the atoms' field to resonate and, after reaching a certain amplitude, caused subatomic corpuscles to be emitted and a current to be detected. The amount of this current varied with the intensity and color of the radiation. Larger radiation intensity or frequency would produce more current.

During the years 1886–1902, Wilhelm Hallwachs and Philipp Lenard investigated the phenomenon of photoelectric emission in detail. Lenard observed that a current flows through an evacuated glass tube enclosing two electrodes when ultraviolet radiation falls on one of them. As soon as ultraviolet radiation is stopped, the current also stops. This initiated the concept of photoelectric emission. The discovery of the ionization of gases by ultraviolet light was made by Philipp Lenard in 1900. As the effect was produced across several centimeters of air and yielded a greater number of positive ions than negative, it was natural to interpret the phenomenon, as J. J. Thomson did, as a Hertz effect upon the particles present in the gas.

20th century

In 1902, Lenard observed that the energy of individual emitted electrons increased with the frequency (which is related to the color) of the light. This appeared to be at odds with Maxwell's wave theory of light, which predicted that the electron energy would be proportional to the intensity of the radiation.

Lenard observed the variation in electron energy with light frequency using a powerful electric arc lamp which enabled him to investigate large changes in intensity, and that had sufficient power to enable him to investigate the variation of the electrode's potential with light frequency. He found the electron energy by relating it to the maximum stopping potential (voltage) in a phototube. He found that the maximum electron kinetic energy is determined by the frequency of the light. For example, an increase in frequency results in an increase in the maximum kinetic energy calculated for an electron upon liberation – ultraviolet radiation would require a higher applied stopping potential to stop current in a phototube than blue light. However, Lenard's results were qualitative rather than quantitative because of the difficulty in performing the experiments: the experiments needed to be done on freshly cut metal so that the pure metal was observed, but it oxidized in a matter of minutes even in the partial vacuums he used. The current emitted by the surface was determined by the light's intensity, or brightness: doubling the intensity of the light doubled the number of electrons emitted from the surface.

The research of Langevin and of Eugene Bloch showed that the greater part of the Lenard effect is certainly due to the Hertz effect. The Lenard effect upon the gas itself nevertheless does exist. Re-examined by J. J. Thomson and then more decisively by Frederic Palmer, Jr., gas photoemission was studied and showed very different characteristics from those at first attributed to it by Lenard.

In 1900, while studying black-body radiation, the German physicist Max Planck suggested in his "On the Law of Distribution of Energy in the Normal Spectrum" paper that the energy carried by electromagnetic waves could only be released in packets of energy. In 1905, Albert Einstein published a paper advancing the hypothesis that light energy is carried in discrete quantized packets to explain experimental data from the photoelectric effect. Einstein theorized that the energy in each quantum of light was equal to the frequency of light multiplied by a constant, later called the Planck constant. A photon above a threshold frequency has the required energy to eject a single electron, creating the observed effect. This was a key step in the development of quantum mechanics. In 1914, Robert A. Millikan's highly accurate measurements of the Planck constant from the photoelectric effect supported Einstein's model, even though a corpuscular theory of light was for Millikan, at the time, "quite unthinkable". Einstein was awarded the 1921 Nobel Prize in Physics for "his discovery of the law of the photoelectric effect", and Millikan was awarded the Nobel Prize in 1923 for "his work on the elementary charge of electricity and on the photoelectric effect". In the quantum perturbation theory of atoms and solids acted upon by electromagnetic radiation, the photoelectric effect is still commonly analyzed in terms of waves; the two approaches are equivalent because photon or wave absorption can only happen between quantized energy levels whose energy difference equals the photon energy.

Albert Einstein's mathematical description of how the photoelectric effect was caused by absorption of quanta of light was in one of his Annus Mirabilis papers, named "On a Heuristic Viewpoint Concerning the Production and Transformation of Light". The paper proposed a simple description of light quanta, or photons, and showed how they explained such phenomena as the photoelectric effect. His simple explanation in terms of absorption of discrete quanta of light agreed with experimental results. It explained why the energy of photoelectrons was dependent only on the frequency of the incident light and not on its intensity: a low-intensity, high-frequency source could supply a few high-energy photons, whereas a high-intensity, low-frequency source would supply no photons of sufficient individual energy to dislodge any electrons. This was an enormous theoretical leap, but the concept was strongly resisted at first because it contradicted the wave theory of light that followed naturally from James Clerk Maxwell's equations of electromagnetism, and more generally, the assumption of infinite divisibility of energy in physical systems. Even after experiments showed that Einstein's equations for the photoelectric effect were accurate, resistance to the idea of photons continued.

Einstein's work predicted that the energy of individual ejected electrons increases linearly with the frequency of the light. Perhaps surprisingly, the precise relationship had not at that time been tested. By 1905 it was known that the energy of photoelectrons increases with increasing frequency of incident light and is independent of the intensity of the light. However, the manner of the increase was not experimentally determined until 1914 when Millikan showed that Einstein's prediction was correct.

The photoelectric effect helped to propel the then-emerging concept of wave–particle duality in the nature of light. Light simultaneously possesses the characteristics of both waves and particles, each being manifested according to the circumstances. The effect was impossible to understand in terms of the classical wave description of light, as the energy of the emitted electrons did not depend on the intensity of the incident radiation. Classical theory predicted that the electrons would 'gather up' energy over a period of time, and then be emitted.

Uses and effects

Photomultipliers

Photomultiplier

These are extremely light-sensitive vacuum tubes with a coated photocathode inside the envelope. The photocathode contains combinations of materials such as cesium, rubidium, and antimony specially selected to provide a low work function, so when illuminated even by very low levels of light, the photocathode readily releases electrons. By means of a series of electrodes (dynodes) at ever-higher potentials, these electrons are accelerated and substantially increased in number through secondary emission to provide a readily detectable output current. Photomultipliers are still commonly used wherever low levels of light must be detected.

Image sensors

Video camera tubes in the early days of television used the photoelectric effect; for example, Philo Farnsworth's "image dissector" used a screen charged by the photoelectric effect to transform an optical image into a scanned electronic signal.

Photoelectron spectroscopy

Angle-resolved photoemission spectroscopy (ARPES) experiment. Helium discharge lamp shines ultraviolet light onto the sample in ultra-high vacuum. Hemispherical electron analyzer measures the distribution of ejected electrons with respect to energy and momentum.

Because the kinetic energy of the emitted electrons is exactly the energy of the incident photon minus the energy of the electron's binding within an atom, molecule or solid, the binding energy can be determined by shining a monochromatic X-ray or UV light of a known energy and measuring the kinetic energies of the photoelectrons. The distribution of electron energies is valuable for studying quantum properties of these systems. It can also be used to determine the elemental composition of the samples. For solids, the kinetic energy and emission angle distribution of the photoelectrons is measured for the complete determination of the electronic band structure in terms of the allowed binding energies and momenta of the electrons. Modern instruments for angle-resolved photoemission spectroscopy are capable of measuring these quantities with a precision better than 1 meV and 0.1°.
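As a rough sketch of how a binding energy is read off from such a measurement (an illustrative addition, rearranging the relation Ek = hν − W − EB from the three-step model above; the He-I photon energy of 21.22 eV is standard, while the 4.5 eV work function and 16.0 eV kinetic energy are invented example values):

    def binding_energy_ev(photon_ev: float, kinetic_ev: float,
                          work_function_ev: float) -> float:
        """EB = h*nu - Ek - W, referenced to the Fermi level of the solid."""
        return photon_ev - kinetic_ev - work_function_ev

    # He-I discharge line (21.22 eV): an electron detected at 16.0 eV kinetic
    # energy through an analyzer with an assumed 4.5 eV work function came
    # from a state about 0.72 eV below the Fermi level.
    print(binding_energy_ev(21.22, 16.0, 4.5))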

Photoelectron spectroscopy measurements are usually performed in a high-vacuum environment, because the electrons would be scattered by gas molecules if they were present. However, some companies are now selling products that allow photoemission in air. The light source can be a laser, a discharge tube, or a synchrotron radiation source.

The concentric hemispherical analyzer is a typical electron energy analyzer. It uses an electric field between two hemispheres to change (disperse) the trajectories of incident electrons depending on their kinetic energies.

Night vision devices

Photons hitting a thin film of alkali metal or semiconductor material such as gallium arsenide in an image intensifier tube cause the ejection of photoelectrons due to the photoelectric effect. These are accelerated by an electrostatic field onto a phosphor-coated screen, converting the electrons back into photons. Intensification of the signal is achieved either through acceleration of the electrons or by increasing the number of electrons through secondary emissions, such as with a micro-channel plate. Sometimes a combination of both methods is used. Additional kinetic energy is required to move an electron out of the conduction band and into the vacuum level. This is known as the electron affinity of the photocathode and is another barrier to photoemission besides the forbidden band, explained by the band gap model. Some materials such as gallium arsenide have an effective electron affinity that is below the level of the conduction band. In these materials, electrons that move to the conduction band all have sufficient energy to be emitted from the material, so the film that absorbs photons can be quite thick. These materials are known as negative electron affinity materials.

Spacecraft

The photoelectric effect will cause spacecraft exposed to sunlight to develop a positive charge. This can be a major problem, as other parts of the spacecraft in shadow develop a negative charge from nearby plasmas, and the imbalance can discharge through delicate electrical components. The static charge created by the photoelectric effect is self-limiting, because a more highly charged object does not give up its electrons as easily as a less charged object does.

Moon dust

Light from the Sun hitting lunar dust causes it to become positively charged from the photoelectric effect. The charged dust then repels itself and lifts off the surface of the Moon by electrostatic levitation. This manifests itself almost like an "atmosphere of dust", visible as a thin haze and blurring of distant features, and visible as a dim glow after the sun has set. This was first photographed by the Surveyor program probes in the 1960s, and most recently the Chang'e 3 rover observed dust deposition on lunar rocks as high as about 28 cm. It is thought that the smallest particles are repelled kilometers from the surface and that the particles move in "fountains" as they charge and discharge.

Competing processes and photoemission cross section

When photon energies are as high as the electron rest energy of 511 keV, yet another process, Compton scattering, may take place. Above twice this energy, at 1.022 MeV, pair production becomes possible as well. Compton scattering and pair production are examples of two other competing mechanisms.

Even if the photoelectric effect is the favoured reaction for a particular interaction of a single photon with a bound electron, the result is also subject to quantum statistics and is not guaranteed. The probability of the photoelectric effect occurring is measured by the cross section of the interaction, σ. This has been found to be a function of the atomic number of the target atom and the photon energy. In a crude approximation, for photon energies above the highest atomic binding energy, the cross section is given by

σ ∝ Z^n / E^3

Here Z is the atomic number and n is a number which varies between 4 and 5. The photoelectric effect rapidly decreases in significance in the gamma-ray region of the spectrum, with increasing photon energy. It is also more likely from elements with high atomic number. Consequently, high-Z materials make good gamma-ray shields, which is the principal reason why lead (Z = 82) is preferred and most widely used.
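The scaling can be made concrete with a short sketch (illustrative only; it assumes the crude Z^n/E^3 form above with n = 4.5 and returns relative values, not absolute cross sections):

    def relative_cross_section(z: int, photon_energy_mev: float,
                               n: float = 4.5) -> float:
        """Crude sigma ~ Z^n / E^3 scaling; valid only well above the
        highest atomic binding energy, and only up to a constant factor."""
        return z**n / photon_energy_mev**3

    # Lead (Z=82) versus aluminium (Z=13) at the same photon energy:
    # roughly a factor of 4000, one reason lead makes a good gamma shield.
    print(relative_cross_section(82, 1.0) / relative_cross_section(13, 1.0))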

Neuroscience of music

From Wikipedia, the free encyclopedia

The neuroscience of music is the scientific study of brain-based mechanisms involved in the cognitive processes underlying music. These behaviours include music listening, performing, composing, reading, writing, and ancillary activities. It also is increasingly concerned with the brain basis for musical aesthetics and musical emotion. Scientists working in this field may have training in cognitive neuroscience, neurology, neuroanatomy, psychology, music theory, computer science, and other relevant fields.

The cognitive neuroscience of music represents a significant branch of music psychology, and is distinguished from related fields such as cognitive musicology in its reliance on direct observations of the brain and use of brain imaging techniques like functional magnetic resonance imaging (fMRI) and positron emission tomography (PET).

Elements of music

Pitch

Sounds consist of waves of air molecules that vibrate at different frequencies. These waves travel to the basilar membrane in the cochlea of the inner ear. Different frequencies of sound will cause vibrations in different locations of the basilar membrane. We are able to hear different pitches because each sound wave with a unique frequency is correlated with a different location along the basilar membrane. This spatial arrangement of sounds and their respective frequencies being processed in the basilar membrane is known as tonotopy. When the hair cells on the basilar membrane move back and forth due to the vibrating sound waves, they release neurotransmitters and cause action potentials to occur down the auditory nerve. The auditory nerve then leads to several layers of synapses at numerous clusters of neurons, or nuclei, in the auditory brainstem. These nuclei are also tonotopically organized, and the process of achieving this tonotopy after the cochlea is not well understood. This tonotopy is in general maintained up to primary auditory cortex in mammals.

A widely postulated mechanism for pitch processing in the early central auditory system is the phase-locking and mode-locking of action potentials to frequencies in a stimulus. Phase-locking to stimulus frequencies has been shown in the auditory nerve, the cochlear nucleus, the inferior colliculus, and the auditory thalamus. By phase- and mode-locking in this way, the auditory brainstem is known to preserve a good deal of the temporal and low-passed frequency information from the original sound; this is evident from measurements of the auditory brainstem response using EEG. This temporal preservation is one way to argue directly for the temporal theory of pitch perception, and to argue indirectly against the place theory of pitch perception.

The primary auditory cortex is one of the main areas associated with superior pitch resolution.

The right secondary auditory cortex has finer pitch resolution than the left. Hyde, Peretz and Zatorre (2008) used functional magnetic resonance imaging (fMRI) in their study to test the involvement of right and left auditory cortical regions in the frequency processing of melodic sequences. As well as finding superior pitch resolution in the right secondary auditory cortex, specific areas found to be involved were the planum temporale (PT) in the secondary auditory cortex, and the primary auditory cortex in the medial section of Heschl's gyrus (HG).

Many neuroimaging studies have found evidence of the importance of right secondary auditory regions in aspects of musical pitch processing, such as melody. Many of these studies, such as one by Patterson, Uppenkamp, Johnsrude and Griffiths (2002), also find evidence of a hierarchy of pitch processing. Patterson et al. (2002) used spectrally matched sounds that produced no pitch, fixed pitch, or melody in an fMRI study and found that all conditions activated HG and PT. Sounds with pitch activated more of these regions than sounds without. When a melody was produced, activation spread to the superior temporal gyrus (STG) and planum polare (PP). These results support the existence of a pitch processing hierarchy.

Absolute pitch

Musicians possessing perfect pitch can identify the pitch of musical tones without external reference.
 

Absolute pitch (AP) is defined as the ability to identify the pitch of a musical tone or to produce a musical tone at a given pitch without the use of an external reference pitch. Neuroscientific research has not discovered a distinct activation pattern common for possessors of AP. Zatorre, Perry, Beckett, Westbury and Evans (1998) examined the neural foundations of AP using functional and structural brain imaging techniques. Positron emission tomography (PET) was utilized to measure cerebral blood flow (CBF) in musicians possessing AP and musicians lacking AP. When presented with musical tones, similar patterns of increased CBF in auditory cortical areas emerged in both groups. AP possessors and non-AP subjects demonstrated similar patterns of left dorsolateral frontal activity when they performed relative pitch judgments. However, in non-AP subjects activation in the right inferior frontal cortex was present whereas AP possessors showed no such activity. This finding suggests that musicians with AP do not need access to working memory devices for such tasks. These findings imply that there is no specific regional activation pattern unique to AP. Rather, the availability of specific processing mechanisms and task demands determine the recruited neural areas.

Melody

Studies suggest that individuals are capable of automatically detecting a difference or anomaly in a melody, such as an out-of-tune pitch that does not fit with their previous musical experience. This automatic processing occurs in the secondary auditory cortex. Brattico, Tervaniemi, Naatanen, and Peretz (2006) performed one such study to determine if the detection of tones that do not fit an individual's expectations can occur automatically. They recorded event-related potentials (ERPs) in nonmusicians as they were presented with unfamiliar melodies containing either an out-of-tune pitch or an out-of-key pitch, while participants were either distracted from the sounds or attending to the melody. Both conditions revealed an early frontal error-related negativity independent of where attention was directed. This negativity originated in the auditory cortex, more precisely in the supratemporal lobe (which corresponds with the secondary auditory cortex), with greater activity from the right hemisphere. The negativity response was larger for pitch that was out of tune than for pitch that was out of key. Ratings of musical incongruity were higher for out-of-tune pitch melodies than for out-of-key pitch. In the focused attention condition, out-of-key and out-of-tune pitches produced late parietal positivity. The findings of Brattico et al. (2006) suggest that there is automatic and rapid processing of melodic properties in the secondary auditory cortex. The findings that pitch incongruities were detected automatically, even in processing unfamiliar melodies, suggest that there is an automatic comparison of incoming information with long-term knowledge of musical scale properties, such as culturally influenced rules of musical properties (common chord progressions, scale patterns, etc.) and individual expectations of how the melody should proceed.

Rhythm

The belt and parabelt areas of the right hemisphere are involved in processing rhythm. Rhythm is a strong repeated pattern of movement or sound. When individuals are preparing to tap out a rhythm of regular intervals (1:2 or 1:3), the left frontal cortex, left parietal cortex, and right cerebellum are all activated. With more difficult rhythms, such as 1:2.5, more areas in the cerebral cortex and cerebellum are involved. EEG recordings have also shown a relationship between brain electrical activity and rhythm perception. Snyder and Large (2005) performed a study examining rhythm perception in human subjects, finding that activity in the gamma band (20–60 Hz) corresponds to the beats in a simple rhythm. Two types of gamma activity were found by Snyder & Large: induced gamma activity and evoked gamma activity. Evoked gamma activity was found after the onset of each tone in the rhythm; this activity was found to be phase-locked (peaks and troughs were directly related to the exact onset of the tone) and did not appear when a gap (missed beat) was present in the rhythm. Induced gamma activity, which was not found to be phase-locked, was also found to correspond with each beat. However, induced gamma activity did not subside when a gap was present in the rhythm, indicating that induced gamma activity may possibly serve as a sort of internal metronome independent of auditory input.

Tonality

Tonality describes the relationships between the elements of melody and harmony – tones, intervals, chords, and scales. These relationships are often characterized as hierarchical, such that one of the elements dominates or attracts another. They occur both within and between every type of element, creating a rich and time-varying perception between tones and their melodic, harmonic, and chromatic contexts. In one conventional sense, tonality refers to just the major and minor scale types – examples of scales whose elements are capable of maintaining a consistent set of functional relationships. The most important functional relationship is that of the tonic note (the first note in a scale) and the tonic chord (the first note in the scale with the third and fifth note) with the rest of the scale. The tonic is the element which tends to assert its dominance and attraction over all others, and it functions as the ultimate point of attraction, rest and resolution for the scale.

The right auditory cortex is primarily involved in perceiving pitch, and parts of harmony, melody and rhythm. One study by Petr Janata found that there are tonality-sensitive areas in the medial prefrontal cortex, the cerebellum, the superior temporal sulci of both hemispheres and the superior temporal gyri (which has a skew towards the right hemisphere). Hemispheric asymmetries in the processing of dissonant/consonant sounds have been demonstrated. ERP studies have shown larger evoked responses over the left temporal area in response to dissonant chords, and over the right one, in response to consonant chords.

Music production and performance

Motor control functions

Musical performance usually involves at least three elementary motor control functions: timing, sequencing, and spatial organization of motor movements. Accuracy in timing of movements is related to musical rhythm. Rhythm, the pattern of temporal intervals within a musical measure or phrase, in turn creates the perception of stronger and weaker beats. Sequencing and spatial organization relate to the expression of individual notes on a musical instrument.

These functions and their neural mechanisms have been investigated separately in many studies, but little is known about their combined interaction in producing a complex musical performance. The study of music requires examining them together.

Timing

Although neural mechanisms involved in timing movement have been studied rigorously over the past 20 years, much remains controversial. The ability to phrase movements in precise time has been attributed to a neural metronome or clock mechanism where time is represented through oscillations or pulses. An opposing view holds that timing is an emergent property of the kinematics of movement itself. Kinematics is defined as the parameters of movement through space without reference to forces (for example, direction, velocity and acceleration).

Functional neuroimaging studies, as well as studies of brain-damaged patients, have linked movement timing to several cortical and sub-cortical regions, including the cerebellum, basal ganglia and supplementary motor area (SMA). Specifically the basal ganglia and possibly the SMA have been implicated in interval timing at longer timescales (1 second and above), while the cerebellum may be more important for controlling motor timing at shorter timescales (milliseconds). Furthermore, these results indicate that motor timing is not controlled by a single brain region, but by a network of regions that control specific parameters of movement and that depend on the relevant timescale of the rhythmic sequence.

Sequencing

Motor sequencing has been explored in terms of either the ordering of individual movements, such as finger sequences for key presses, or the coordination of subcomponents of complex multi-joint movements. Implicated in this process are various cortical and sub-cortical regions, including the basal ganglia, the SMA and the pre-SMA, the cerebellum, and the premotor and prefrontal cortices, all involved in the production and learning of motor sequences but without explicit evidence of their specific contributions or interactions amongst one another. In animals, neurophysiological studies have demonstrated an interaction between the frontal cortex and the basal ganglia during the learning of movement sequences. Human neuroimaging studies have also emphasized the contribution of the basal ganglia for well-learned sequences.

The cerebellum is arguably important for sequence learning and for the integration of individual movements into unified sequences, while the pre-SMA and SMA have been shown to be involved in organizing or chunking of more complex movement sequences. Chunking, defined as the re-organization or re-grouping of movement sequences into smaller sub-sequences during performance, is thought to facilitate the smooth performance of complex movements and to improve motor memory. Lastly, the premotor cortex has been shown to be involved in tasks that require the production of relatively complex sequences, and it may contribute to motor prediction.

Spatial organization

Few studies of complex motor control have distinguished between sequential and spatial organization, yet expert musical performances demand not only precise sequencing but also spatial organization of movements. Studies in animals and humans have established the involvement of parietal, sensory–motor and premotor cortices in the control of movements, when the integration of spatial, sensory and motor information is required. Few studies so far have explicitly examined the role of spatial processing in the context of musical tasks.

Auditory-motor interactions

Feedforward and feedback interactions

An auditory–motor interaction may be loosely defined as any engagement of or communication between the two systems. Two classes of auditory–motor interaction are "feedforward" and "feedback". In feedforward interactions, it is the auditory system that predominantly influences the motor output, often in a predictive way. An example is the phenomenon of tapping to the beat, where the listener anticipates the rhythmic accents in a piece of music. Another example is the effect of music on movement disorders: rhythmic auditory stimuli have been shown to improve walking ability in Parkinson's disease and stroke patients.

Feedback interactions are particularly relevant in playing an instrument such as a violin, or in singing, where pitch is variable and must be continuously controlled. If auditory feedback is blocked, musicians can still execute well-rehearsed pieces, but expressive aspects of performance are affected. When auditory feedback is experimentally manipulated by delays or distortions, motor performance is significantly altered: asynchronous feedback disrupts the timing of events, whereas alteration of pitch information disrupts the selection of appropriate actions, but not their timing. This suggests that disruptions occur because both actions and percepts depend on a single underlying mental representation.

Models of auditory–motor interactions

Several models of auditory–motor interactions have been advanced. The model of Hickok and Poeppel, which is specific for speech processing, proposes that a ventral auditory stream maps sounds onto meaning, whereas a dorsal stream maps sounds onto articulatory representations. They and others suggest that posterior auditory regions at the parieto-temporal boundary are crucial parts of the auditory–motor interface, mapping auditory representations onto motor representations of speech and melodies.

Mirror/echo neurons and auditory–motor interactions

The mirror neuron system has an important role in neural models of sensory–motor integration. There is considerable evidence that neurons respond to both actions and the accumulated observation of actions. A system proposed to explain this understanding of actions is that visual representations of actions are mapped onto our own motor system.

Some mirror neurons are activated both by the observation of goal-directed actions, and by the associated sounds produced during the action. This suggests that the auditory modality can access the motor system. While these auditory–motor interactions have mainly been studied for speech processes, and have focused on Broca's area and the vPMC, as of 2011, experiments have begun to shed light on how these interactions are needed for musical performance. Results point to a broader involvement of the dPMC and other motor areas. The literature has shown a highly specialized cortical network in the skilled musician's brain that codes the relationship between musical gestures and their corresponding sounds. The data hint at the existence of an audiomotor mirror network involving the right superior temporal gyrus, the premotor cortex, the inferior frontal and inferior parietal areas, among other areas.  

Music and language

Certain aspects of language and melody have been shown to be processed in near identical functional brain areas. Brown, Martinez and Parsons (2006) examined the neurological structural similarities between music and language. Utilizing positron emission tomography (PET), the findings showed that both linguistic and melodic phrases produced activation in almost identical functional brain areas. These areas included the primary motor cortex, supplementary motor area, Broca's area, anterior insula, primary and secondary auditory cortices, temporal pole, basal ganglia, ventral thalamus and posterior cerebellum. Differences were found in lateralization tendencies as language tasks favoured the left hemisphere, but the majority of activations were bilateral which produced significant overlap across modalities.

Syntactical information mechanisms in both music and language have been shown to be processed similarly in the brain. Jentschke, Koelsch, Sallat and Friederici (2008) conducted a study investigating the processing of music in children with specific language impairments (SLI). Children with typical language development (TLD) showed ERP patterns different from those of children with SLI, which reflected their challenges in processing music-syntactic regularities. Strong correlations between the ERAN (Early Right Anterior Negativity—a specific ERP measure) amplitude and linguistic and musical abilities provide additional evidence for the relationship of syntactical processing in music and language.

However, production of melody and production of speech may be subserved by different neural networks. Stewart, Walsh, Frith and Rothwell (2001) studied the differences between speech production and song production using transcranial magnetic stimulation (TMS). Stewart et al. found that TMS applied to the left frontal lobe disturbs speech but not melody, supporting the idea that they are subserved by different areas of the brain. The authors suggest that a reason for the difference is that speech generation can be localized well but the underlying mechanisms of melodic production cannot. Alternatively, it was also suggested that speech production may be less robust than melodic production and thus more susceptible to interference.

Language processing is a function more of the left side of the brain than the right side, particularly Broca's area and Wernicke's area, though the roles played by the two sides of the brain in processing different aspects of language are still unclear. Music is also processed by both the left and the right sides of the brain. Recent evidence further suggests shared processing between language and music at the conceptual level. It has also been found that, among music conservatory students, the prevalence of absolute pitch is much higher for speakers of tone language, even controlling for ethnic background, showing that language influences how musical tones are perceived.

Musician vs. non-musician processing

Professional pianists show less cortical activation for complex finger movement tasks due to structural differences in the brain.

Differences

Brain structure differs distinctly between musicians and non-musicians. Gaser and Schlaug (2003) compared brain structures of professional musicians with non-musicians and discovered gray matter volume differences in motor, auditory and visual-spatial brain regions. Specifically, positive correlations were discovered between musician status (professional, amateur and non-musician) and gray matter volume in the primary motor and somatosensory areas, premotor areas, anterior superior parietal areas and in the inferior temporal gyrus bilaterally. This strong association between musician status and gray matter differences supports the notion that musicians' brains show use-dependent structural changes. Given the distinct differences in several brain regions, it is unlikely that these differences are innate; rather, they are due to the long-term acquisition and repetitive rehearsal of musical skills.

Brains of musicians also show functional differences from those of non-musicians. Krings, Topper, Foltys, Erberich, Sparing, Willmes and Thron (2000) utilized fMRI to study brain area involvement of professional pianists and a control group while performing complex finger movements. Krings et al. found that the professional piano players showed lower levels of cortical activation in motor areas of the brain. It was concluded that fewer neurons needed to be activated for the piano players due to long-term motor practice, which results in different cortical activation patterns. Koeneke, Lutz, Wustenberg and Jancke (2004) reported similar findings in keyboard players. Skilled keyboard players and a control group performed complex tasks involving unimanual and bimanual finger movements. During task conditions, strong hemodynamic responses in the cerebellum were shown by both non-musicians and keyboard players, but non-musicians showed the stronger response. This finding indicates that different cortical activation patterns emerge from long-term motor practice. This evidence supports previous data showing that musicians require fewer neurons to perform the same movements.

Musicians have been shown to have a significantly more developed left planum temporale, and have also been shown to have greater word memory. Chan's study controlled for age, grade point average and years of education, and found that when given a 16-word memory test, the musicians averaged one to two more words than their non-musical counterparts.

Similarities

Studies have shown that the human brain has an implicit musical ability. Koelsch, Gunter, Friederici and Schoger (2000) investigated the influence of preceding musical context, task relevance of unexpected chords and the degree of probability of violation on music processing in both musicians and non-musicians. Findings showed that the human brain unintentionally extrapolates expectations about impending auditory input. Even in non-musicians, the extrapolated expectations are consistent with music theory. The ability to process information musically supports the idea of an implicit musical ability in the human brain. In a follow-up study, Koelsch, Schroger, and Gunter (2002) investigated whether ERAN and N5 could be evoked preattentively in non-musicians. Findings showed that both ERAN and N5 can be elicited even in a situation where the musical stimulus is ignored by the listener, indicating that there is a highly differentiated preattentive musicality in the human brain.

Gender differences

Minor neurological differences regarding hemispheric processing exist between brains of males and females. Koelsch, Maess, Grossmann and Friederici (2003) investigated music processing through EEG and ERPs and discovered gender differences. Findings showed that females process music information bilaterally and males process music with a right-hemispheric predominance. However, the early negativity of males was also present over the left hemisphere. This indicates that males do not exclusively utilize the right hemisphere for musical information processing. In a follow-up study, Koelsch, Grossman, Gunter, Hahne, Schroger and Friederici (2003) found that boys show lateralization of the early anterior negativity in the left hemisphere but found a bilateral effect in girls. This indicates a developmental effect as early negativity is lateralized in the right hemisphere in men and in the left hemisphere in boys.

Handedness differences

Subjects who are left-handed, particularly those who are also ambidextrous, have been found to perform better than right-handers on short-term memory for pitch. It has been hypothesized that this handedness advantage arises because left-handers have more duplication of storage across the two hemispheres than right-handers do. Other work has shown pronounced statistical differences between right-handers and left-handers in how musical patterns are perceived when sounds come from different regions of space. This has been found, for example, in the octave illusion and the scale illusion.

Musical imagery

Musical imagery refers to the experience of replaying music by imagining it inside the head. Musicians show a superior ability for musical imagery as a result of intense musical training. Herholz, Lappe, Knief and Pantev (2008) used magnetoencephalography (MEG) to investigate differences between musicians and non-musicians in the neural processing of a musical imagery task with familiar melodies. Specifically, the study examined whether the mismatch negativity (MMN) can be based solely on the imagery of sounds. In the task, participants listened to the beginning of a melody, continued it in their heads, and finally heard a tone that was either a correct or an incorrect continuation of the melody. In the musicians, the imagery of these melodies was strong enough to elicit an early preattentive brain response to unanticipated violations of the imagined melodies. These results indicate that trained musicians' imagery and perception rely on similar neural correlates, and suggest that intense musical training modifies the imagery mismatch negativity (iMMN), yielding a superior ability for imagery and preattentive processing of music.

Perceptual musical processes and musical imagery may share a neural substrate in the brain. A PET study by Zatorre, Halpern, Perry, Meyer and Evans (1996) investigated cerebral blood flow (CBF) changes related to auditory imagery and perceptual tasks, examining both the involvement of particular anatomical regions and the functional commonalities between perception and imagery. Similar patterns of CBF changes provided evidence that imagery processes share a substantial neural substrate with related perceptual processes. Bilateral neural activity in the secondary auditory cortex was associated with both perceiving and imagining songs, implying that processes within the secondary auditory cortex underlie the phenomenological impression of imagined sounds. The supplementary motor area (SMA) was active in both imagery and perceptual tasks, suggesting covert vocalization as an element of musical imagery. CBF increases in the inferior frontal polar cortex and right thalamus suggest that these regions may be related to the retrieval and/or generation of auditory information from memory.

Emotion

Music can create an intensely pleasurable experience often described as "chills". Blood and Zatorre (2001) used PET to measure changes in cerebral blood flow while participants listened to music they knew gave them "chills" or some other intensely pleasant emotional response. They found that as these chills increase, changes in cerebral blood flow appear in brain regions such as the amygdala, orbitofrontal cortex, ventral striatum, midbrain and ventral medial prefrontal cortex. Many of these areas are linked to reward, motivation, emotion and arousal, and are also activated in other pleasurable situations. The resulting pleasure responses enable the release of dopamine, serotonin and oxytocin. The nucleus accumbens (part of the striatum) is involved both in music-related emotion and in rhythmic timing.

According to the National Institutes of Health, children and adults suffering from emotional trauma have benefited from the use of music in a variety of ways. Music used therapeutically has helped children who struggle with focus, anxiety and cognitive function, and music therapy has helped children cope with autism, pediatric cancer and pain from treatments.

Emotions induced by music activate frontal brain regions similar to those activated by emotions elicited by other stimuli. Schmidt and Trainor (2001) found that the valence (i.e., positive vs. negative) of musical segments was distinguished by patterns of frontal EEG activity: joyful and happy segments were associated with increases in left frontal EEG activity, whereas fearful and sad segments were associated with increases in right frontal EEG activity. Additionally, the intensity of the emotion was differentiated by overall frontal EEG activity, which increased as the affective musical stimuli became more intense.
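
Valence effects of this kind are often summarized with a frontal asymmetry index computed from band power at homologous left and right frontal electrodes. Below is a minimal sketch on synthetic signals, assuming hypothetical F3/F4 channels and a conventional alpha-band (8–13 Hz) index; it is an illustration of the approach, not Schmidt and Trainor's actual analysis.

```python
# Sketch: a frontal EEG alpha-asymmetry index on synthetic data.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 256                                  # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)              # 30 s of data per channel

# Synthetic left (F3) and right (F4) frontal channels: noise plus a
# 10 Hz alpha rhythm that is weaker on the left (i.e., more left activity).
f3 = rng.normal(0, 1, t.size) + 0.5 * np.sin(2 * np.pi * 10 * t)
f4 = rng.normal(0, 1, t.size) + 1.0 * np.sin(2 * np.pi * 10 * t)

def alpha_power(x, fs, lo=8.0, hi=13.0):
    """Mean power spectral density in the alpha band (8-13 Hz)."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

# Index: ln(right alpha) - ln(left alpha). Because alpha power is inversely
# related to cortical activity, a positive index suggests relatively greater
# left frontal activity, the pattern associated with positive valence.
asymmetry = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
print(f"Frontal alpha asymmetry index: {asymmetry:.3f}")
```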

When unpleasant melodies are played, the posterior cingulate cortex activates, indicating a sense of conflict or emotional pain. The right hemisphere has also been found to be correlated with emotion and can likewise activate areas of the cingulate during emotional pain, particularly social rejection (Eisenberger). This evidence, along with observation, has led many music theorists, philosophers and neuroscientists to link emotion with tonality. The connection is intuitive because musical tones resemble the tones of human speech, which carry emotional content: the vowels in the phonemes of a song are elongated for dramatic effect, and musical tones can be seen as exaggerations of normal verbal tonality.

Memory

Neuropsychology of musical memory

Musical memory involves both explicit and implicit memory systems. Explicit musical memory is further differentiated into episodic memory (the where, when and what of the musical experience) and semantic memory (knowledge about music, including facts and emotional concepts). Implicit memory centers on the 'how' of music and involves automatic processes such as procedural memory and motor-skill learning – in other words, skills critical for playing an instrument. Samson and Baird (2009) found that the ability of musicians with Alzheimer's disease to play an instrument (implicit procedural memory) may be preserved.

Neural correlates of musical memory

A PET study of the neural correlates of musical semantic and episodic memory found distinct activation patterns. Semantic musical memory underlies the sense of familiarity with songs. The semantic-memory condition produced bilateral activation in the medial and orbital frontal cortex, as well as activation in the left angular gyrus and the left anterior region of the middle temporal gyri, supporting a functional asymmetry favouring the left hemisphere for semantic memory. The left anterior temporal and inferior frontal regions activated in the musical semantic memory task produced activation peaks specifically during the presentation of musical material, suggesting that these regions are somewhat functionally specialized for musical semantic representations.

Episodic memory for musical information involves the ability to recall the context formerly associated with a musical excerpt. In the condition invoking episodic memory for music, activations were found bilaterally in the middle and superior frontal gyri and the precuneus, with activation predominant in the right hemisphere. Other studies have found the precuneus to be activated during successful episodic recall; since it was activated in the familiar condition of the episodic memory task, this activation may be explained by successful recall of the melody.

For memory of pitch, a dynamic and distributed brain network appears to subserve pitch memory processes. Gaab, Gaser, Zaehle, Jancke and Schlaug (2003) examined the functional anatomy of pitch memory using functional magnetic resonance imaging (fMRI). An analysis of performance scores in a pitch memory task revealed a significant correlation between good task performance and activation in the supramarginal gyrus (SMG) as well as the dorsolateral cerebellum. The findings indicate that the dorsolateral cerebellum may act as a pitch-discrimination processor and the SMG as a short-term storage site for pitch information. Left-hemispheric regions were more prominently involved in the pitch memory task than right-hemispheric regions.
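
The brain–behaviour relationship reported here is, at its core, a correlation between per-subject task scores and per-subject activation estimates in a region of interest. A minimal sketch of that computation, using invented placeholder numbers rather than data from the study:

```python
# Sketch: correlating behavioural scores with region-of-interest (ROI)
# activation estimates. All values below are made-up placeholders.
from scipy.stats import pearsonr

pitch_scores = [62, 55, 71, 80, 58, 67, 74, 49, 66, 77]            # % correct (invented)
smg_betas = [0.8, 0.5, 1.1, 1.4, 0.6, 0.9, 1.2, 0.4, 0.9, 1.3]     # SMG ROI betas (invented)

r, p = pearsonr(pitch_scores, smg_betas)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```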

Therapeutic effects of music on memory

Musical training has been shown to aid memory. Altenmuller et al. studied the difference between active and passive musical instruction and found that over a longer (but not a short) period of time, actively taught students retained much more information than passively taught students. The actively taught students also showed greater cerebral cortex activation. The passively taught students did not go without benefit: along with the active group, they displayed greater left-hemisphere activity, which is typical of trained musicians.

Research suggests we listen to the same songs repeatedly because of musical nostalgia. One major study, published in the journal Memory & Cognition, found that music enables the mind to evoke memories of the past.

Attention

Treder et al. identified neural correlates of attention while listening to simplified polyphonic music patterns. In a musical oddball experiment, participants shifted selective attention to one of three instruments in music audio clips, with each instrument occasionally playing one or several notes that deviated from an otherwise repetitive pattern. Contrasting attended versus unattended instruments, ERP analysis showed subject- and instrument-specific responses, including the P300 and early auditory components. The attended instrument could be classified offline with high accuracy. This indicates that the instrument attended to in polyphonic music can be inferred from ongoing EEG, a finding potentially relevant for building more ergonomic music-listening-based brain–computer interfaces.
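
Offline classification of this kind typically extracts single-trial ERP features (channels × time points) and trains a linear classifier to separate attended from unattended deviants. The sketch below uses synthetic features and a linear discriminant with cross-validation; it illustrates the general approach under those assumptions and is not Treder et al.'s actual pipeline.

```python
# Sketch: classifying attended vs. unattended deviants from single-trial
# ERP features. Synthetic features stand in for real epoched EEG.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_features = 200, 32        # trials x (channels * time points)

# Synthetic feature matrix: attended deviants (label 1) carry a small
# additive P300-like offset relative to unattended ones (label 0).
X = rng.normal(0, 1, size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)
X[y == 1] += 0.4

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```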

Development

Musically trained four-year-olds have been found to have greater left-hemisphere intrahemispheric coherence.[84] Musicians have also been found to have more developed anterior portions of the corpus callosum in a study by Cowell et al. in 1992. This was confirmed by a study by Schlaug et al. in 1995, which found that classical musicians between the ages of 21 and 36 have significantly larger anterior corpora callosa than non-musical controls. Schlaug also found a strong correlation between musical exposure before the age of seven and a large increase in the size of the corpus callosum. These fibers join the left and right hemispheres and indicate increased relaying between the two sides of the brain, suggesting a merging of the right brain's spatial-emotional-tonal processing with the left brain's linguistic processing. This extensive relaying across many different areas of the brain may contribute to music's ability to aid memory function.

Impairment

Focal hand dystonia

Focal hand dystonia is a task-related movement disorder associated with occupational activities that require repetitive hand movements. It is associated with abnormal processing in the premotor and primary sensorimotor cortices. An fMRI study examined five guitarists with focal hand dystonia, reproducing the task-specific dystonia by having the guitarists play a real guitar neck inside the scanner and perform a guitar exercise that triggered the abnormal hand movements. The dystonic guitarists showed significantly greater activation of the contralateral primary sensorimotor cortex together with bilateral underactivation of premotor areas, a pattern that represents abnormal recruitment of the cortical areas involved in motor control. Even in professional musicians, widespread bilateral involvement of cortical regions is necessary to produce complex hand movements such as scales and arpeggios. The abnormal shift from premotor to primary sensorimotor activation correlates directly with guitar-induced hand dystonia.

Music agnosia

Music agnosia, an auditory agnosia, is a syndrome of selective impairment in music recognition. Three cases of music agnosia were examined by Dalla Bella and Peretz (1999): C.N., G.L. and I.R. All three patients had suffered bilateral damage to the auditory cortex, which resulted in musical difficulties while speech comprehension remained intact. Their impairment was specific to the recognition of once-familiar melodies; recognition of environmental sounds and of lyrics was spared. Peretz (1996) studied C.N.'s music agnosia further and reported an initial impairment of pitch processing with spared temporal processing. C.N. later recovered pitch-processing abilities but remained impaired in tune recognition and familiarity judgments.

Musical agnosias may be categorized according to which process is impaired. Apperceptive music agnosia involves an impairment at the level of perceptual analysis: an inability to encode musical information correctly. Associative music agnosia reflects an impaired representational system that disrupts music recognition. Many cases of music agnosia have resulted from surgery involving the middle cerebral artery. Patient studies have amassed a large amount of evidence demonstrating that the left side of the brain is more suited to holding long-term memory representations of music and that the right side is important for controlling access to these representations. Associative music agnosias tend to be produced by damage to the left hemisphere, while apperceptive music agnosia reflects damage to the right hemisphere.

Congenital amusia

Congenital amusia, otherwise known as tone deafness, is a term for lifelong musical problems that are not attributable to intellectual disability, lack of exposure to music, deafness, or brain damage after birth. MRI studies have found that amusic brains have less white matter and thicker cortex than controls in the right inferior frontal cortex. These differences suggest abnormal neuronal development in the auditory cortex and inferior frontal gyrus, two areas that are important in musical-pitch processing.

Studies of those with amusia suggest that different processes are involved in speech tonality and musical tonality. Congenital amusics lack the ability to distinguish between pitches and so, for example, are unmoved by dissonance or by a wrong note played on a piano. They also cannot be taught to remember a melody or to recite a song; however, they are still capable of hearing the intonation of speech, for example distinguishing between "You speak French." and "You speak French?" when spoken.

Amygdala damage

Damage to the amygdala can selectively impair the emotional recognition of music. Gosselin, Peretz, Johnsen and Adolphs (2007) studied S.M., a patient with bilateral damage to the amygdala and an otherwise intact temporal lobe, and found that S.M. was impaired in recognizing scary and sad music. S.M.'s perception of happy music was normal, as was her ability to use cues such as tempo to distinguish between happy and sad music. It appears that damage specific to the amygdala can selectively impair the recognition of scary music.

Selective deficit in music reading

Specific musical impairments may result from brain damage that leaves other musical abilities intact. Cappelletti, Waley-Cohen, Butterworth and Kopelman (2000) reported the case of P.K.C., a professional musician who sustained damage to the left posterior temporal lobe as well as a small right occipitotemporal lesion. After sustaining this damage, P.K.C. was selectively impaired at reading, writing and understanding musical notation but retained his other musical skills. He could still read aloud letters, words, numbers and symbols (including musical ones), but he could not read aloud musical notes on the staff, regardless of whether the task involved naming them with conventional letters or singing or playing them. Despite this specific deficit, P.K.C. retained the ability to remember and play familiar and new melodies.

Auditory arrhythmia

Arrhythmia in the auditory modality is defined as a disturbance of rhythmic sense and includes deficits such as the inability to perform music rhythmically, to keep time to music, or to discriminate between or reproduce rhythmic patterns. A study investigating the elements of rhythmic function examined patient H.J., who acquired arrhythmia after sustaining a right temporoparietal infarct. Damage to this region impaired H.J.'s central timing system, which was essentially the basis of his global rhythmic impairment; for example, H.J. was unable to generate steady pulses in a tapping task. These findings suggest that keeping a musical beat relies on functioning of the right temporal auditory cortex.
