
Friday, May 24, 2019

Neuroscience of music

From Wikipedia, the free encyclopedia

The neuroscience of music is the scientific study of brain-based mechanisms involved in the cognitive processes underlying music. These behaviours include music listening, performing, composing, reading, writing, and ancillary activities. It also is increasingly concerned with the brain basis for musical aesthetics and musical emotion. Scientists working in this field may have training in cognitive neuroscience, neurology, neuroanatomy, psychology, music theory, computer science, and other relevant fields.

The cognitive neuroscience of music represents a significant branch of music psychology, and is distinguished from related fields such as cognitive musicology in its reliance on direct observations of the brain and use of such techniques as functional magnetic resonance imaging (fMRI), transcranial magnetic stimulation (TMS), magnetoencephalography (MEG), electroencephalography (EEG), and positron emission tomography (PET).

Neurological bases

Auditory Pathway

Children who study music show increased development of the auditory pathway after only two years of training, a change that may accelerate language and reading development.

Pitch

Successive parts of the tonotopically organized basilar membrane in the cochlea resonate to corresponding frequency bandwidths of incoming sound. The hair cells in the cochlea release neurotransmitter as a result, causing action potentials to travel down the auditory nerve. The auditory nerve then leads to several layers of synapses at numerous nuclei in the auditory brainstem. These nuclei are also tonotopically organized, though how this tonotopy is established beyond the cochlea is not well understood. The tonotopy is generally maintained up to the primary auditory cortex in mammals; however, cells in primary and non-primary auditory cortex often have spatio-temporal receptive fields rather than responding strictly to, or phase-locking their action potentials to, narrow frequency regions.

A widely postulated mechanism for pitch processing in the early central auditory system is the phase-locking and mode-locking of action potentials to frequencies in a stimulus. Phase-locking to stimulus frequencies has been shown in the auditory nerve, the cochlear nucleus, the inferior colliculus, and the auditory thalamus. By phase- and mode-locking in this way, the auditory brainstem is known to preserve a good deal of the temporal and low-passed frequency information from the original sound; this is evident when the auditory brainstem response is measured with EEG. This temporal preservation is one way to argue directly for the temporal theory of pitch perception, and to argue indirectly against the place theory of pitch perception.
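The temporal fine structure preserved by phase-locking is exactly what temporal (autocorrelation-style) models of pitch exploit. As an illustration only, not a model of the brainstem itself, the short Python sketch below estimates a fundamental frequency from a signal's periodicity; the sampling rate and search range are arbitrary assumptions.

```python
# A minimal sketch of temporal (autocorrelation-based) pitch estimation,
# illustrating how periodicity preserved by phase-locked timing could in
# principle be read out. All parameter values here are assumptions.
import numpy as np

def estimate_pitch_autocorr(signal, sample_rate, fmin=50.0, fmax=2000.0):
    """Return the fundamental frequency (Hz) with the strongest periodicity."""
    signal = signal - np.mean(signal)
    # Full autocorrelation, keep non-negative lags only.
    autocorr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min = int(sample_rate / fmax)          # shortest period considered
    lag_max = int(sample_rate / fmin)          # longest period considered
    best_lag = lag_min + np.argmax(autocorr[lag_min:lag_max])
    return sample_rate / best_lag

# Usage: a 220 Hz tone sampled at 16 kHz should come back close to 220 Hz.
sr = 16000
t = np.arange(0, 0.1, 1 / sr)
tone = np.sin(2 * np.pi * 220 * t)
print(round(estimate_pitch_autocorr(tone, sr), 1))
```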

Melody processing in the secondary auditory cortex

Studies suggest that individuals are capable of automatically detecting a difference or anomaly in a melody, such as an out-of-tune pitch that does not fit with their previous musical experience. This automatic processing occurs in the secondary auditory cortex. Brattico, Tervaniemi, Näätänen, and Peretz (2006) performed one such study to determine whether the detection of tones that do not fit an individual's expectations can occur automatically. They recorded event-related potentials (ERPs) in nonmusicians as they were presented with unfamiliar melodies containing either an out-of-tune pitch or an out-of-key pitch, while participants were either distracted from the sounds or attending to the melody. Both conditions revealed an early frontal negativity independent of where attention was directed. This negativity originated in the auditory cortex, more precisely in the supratemporal lobe (which corresponds to the secondary auditory cortex), with greater activity in the right hemisphere. The negativity response was larger for pitches that were out of tune than for those that were out of key, and ratings of musical incongruity were likewise higher for out-of-tune melodies than for out-of-key melodies. In the focused-attention condition, out-of-key and out-of-tune pitches produced a late parietal positivity.

The findings of Brattico et al. (2006) suggest that there is automatic and rapid processing of melodic properties in the secondary auditory cortex. The fact that pitch incongruities were detected automatically, even in unfamiliar melodies, suggests an automatic comparison of incoming information with long-term knowledge of musical scale properties, such as culturally influenced rules (common chord progressions, scale patterns, etc.) and individual expectations of how the melody should proceed. The auditory area that processes the sound of music is located in the temporal lobe, which deals with the recognition and perception of auditory stimuli, memory, and speech (Kinser, 2012).
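For readers unfamiliar with the ERP methodology used in studies such as Brattico et al. (2006), a negativity of this kind is typically quantified as a difference wave: deviant epochs are averaged, standard epochs are averaged, and the two averages are subtracted. The sketch below is a minimal, hypothetical illustration of that computation; the data shapes, sampling rate, and latency window are assumptions, not values from the study.

```python
# Minimal sketch of how an ERP "negativity" is commonly quantified:
# average deviant and standard epochs separately, then subtract, and take
# the mean amplitude in a latency window. Shapes and windows are assumed.
import numpy as np

def difference_wave(standard_epochs, deviant_epochs):
    """epochs: arrays of shape (n_trials, n_samples), baseline-corrected."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def mean_amplitude(wave, sample_rate, t_start, t_end, epoch_start=-0.1):
    """Mean amplitude of the difference wave in a latency window (seconds)."""
    i0 = int((t_start - epoch_start) * sample_rate)
    i1 = int((t_end - epoch_start) * sample_rate)
    return wave[i0:i1].mean()

# Usage with fake data: 100 trials, 0.6 s epochs at 500 Hz.
rng = np.random.default_rng(0)
std = rng.normal(size=(100, 300))
dev = rng.normal(size=(100, 300)) - 0.5      # pretend deviants are more negative
wave = difference_wave(std, dev)
print(mean_amplitude(wave, 500, 0.1, 0.2))   # early (100-200 ms) window
```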

Role of right auditory cortex in fine pitch resolution

The primary auditory cortex is one of the main areas associated with superior pitch resolution.
 
The right secondary auditory cortex has finer pitch resolution than the left. Hyde, Peretz and Zatorre (2008) used functional magnetic resonance imaging (fMRI) in their study to test the involvement of right and left auditory cortical regions in frequency processing of melodic sequences. As well as finding superior pitch resolution in the right secondary auditory cortex, specific areas found to be involved were the planum temporale (PT) in the secondary auditory cortex, and the primary auditory cortex in the medial section of Heschl's gyrus (HG). 

Many neuroimaging studies have found evidence of the importance of right secondary auditory regions in aspects of musical pitch processing, such as melody. Many of these studies, such as one by Patterson, Uppenkamp, Johnsrude and Griffiths (2002), also find evidence of a hierarchy of pitch processing. In an fMRI study, Patterson et al. (2002) used spectrally matched sounds that produced no pitch, fixed pitch, or melody, and found that all conditions activated HG and PT. Sounds with pitch activated more of these regions than sounds without. When a melody was produced, activation spread to the superior temporal gyrus (STG) and planum polare (PP). These results support the existence of a pitch processing hierarchy.

Rhythm

The belt and parabelt areas of the right hemisphere are involved in processing rhythm. When individuals are preparing to tap out a rhythm of regular intervals (1:2 or 1:3), the left frontal cortex, left parietal cortex, and right cerebellum are all activated. With more difficult rhythms, such as 1:2.5, more areas in the cerebral cortex and cerebellum are involved. EEG recordings have also shown a relationship between brain electrical activity and rhythm perception. Snyder and Large (2005) performed a study examining rhythm perception in human subjects, finding that activity in the gamma band (20–60 Hz) corresponds to the beats in a simple rhythm. Snyder and Large found two types of gamma activity: induced gamma activity and evoked gamma activity. Evoked gamma activity was found after the onset of each tone in the rhythm; this activity was phase-locked (peaks and troughs were directly related to the exact onset of the tone) and did not appear when a gap (missed beat) was present in the rhythm. Induced gamma activity, which was not phase-locked, was also found to correspond with each beat. However, induced gamma activity did not subside when a gap was present in the rhythm, indicating that it may serve as a sort of internal metronome independent of auditory input. The motor and auditory areas are located in the cerebrum. The motor areas, which process the rhythm of the music (Dean, 2013), lie in the frontal lobe, adjacent to the parietal lobe, which deals with orientation, recognition, and perception.
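The evoked/induced distinction drawn by Snyder and Large comes down to when power is computed relative to averaging across trials: evoked (phase-locked) activity survives averaging the raw trials, while induced (non-phase-locked) activity only shows up in single-trial power. The Python sketch below illustrates one common way to compute both; the filter settings, data shapes, and use of the Hilbert envelope are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of the evoked vs. induced gamma distinction: evoked power
# is computed on the trial average (phase-locked activity survives
# averaging); induced power is computed per trial after removing the
# average, then averaged. Filter settings and shapes are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_band(x, sample_rate, low=20.0, high=60.0, order=4):
    b, a = butter(order, [low, high], btype="bandpass", fs=sample_rate)
    return filtfilt(b, a, x, axis=-1)

def evoked_and_induced_power(trials, sample_rate):
    """trials: array of shape (n_trials, n_samples) from one EEG channel."""
    filtered = gamma_band(trials, sample_rate)
    avg = filtered.mean(axis=0)
    evoked = np.abs(hilbert(avg)) ** 2                 # phase-locked component
    residual = filtered - avg                          # remove phase-locked part
    induced = (np.abs(hilbert(residual, axis=-1)) ** 2).mean(axis=0)
    return evoked, induced

# Usage with fake data: 50 trials of 1 s at 250 Hz.
rng = np.random.default_rng(1)
trials = rng.normal(size=(50, 250))
evoked, induced = evoked_and_induced_power(trials, 250)
print(evoked.shape, induced.shape)
```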

Tonality

Tonality describes the relationships between the elements of melody and harmony – tones, intervals, chords, and scales. These relationships are often characterised as hierarchical, such that one of the elements dominates or attracts another. They occur both within and between every type of element, creating a rich and time-varying percept between tones and their melodic, harmonic, and chromatic contexts. In one conventional sense, tonality refers to just the major and minor scale types – examples of scales whose elements are capable of maintaining a consistent set of functional relationships. The most important functional relationship is that of the tonic note and the tonic chord with the rest of the scale. The tonic is the element which tends to assert its dominance and attraction over all others, and it functions as the ultimate point of attraction, rest and resolution for the scale.

The right auditory cortex is primarily involved in perceiving pitch, and parts of harmony, melody and rhythm. One study by Petr Janata found that there are tonality-sensitive areas in the medial prefrontal cortex, the cerebellum, the superior temporal sulci of both hemispheres and the superior temporal gyri (which has a skew towards the right hemisphere).

Music production and performance

Motor control functions

Musical performance usually involves at least three elementary motor control functions: timing, sequencing, and spatial organization of motor movements. Accuracy in the timing of movements is related to musical rhythm. Rhythm, the pattern of temporal intervals within a musical measure or phrase, in turn creates the perception of stronger and weaker beats. Sequencing and spatial organization relate to the expression of individual notes on a musical instrument.

These functions and their neural mechanisms have been investigated separately in many studies, but little is known about their combined interaction in producing a complex musical performance. The study of music requires examining them together.

Timing

Although the neural mechanisms involved in timing movement have been studied rigorously over the past 20 years, much remains controversial. The ability to phrase movements in precise time has been attributed to a neural metronome or clock mechanism in which time is represented through oscillations or pulses. An opposing view holds that timing is an emergent property of the kinematics of movement itself. Kinematics is defined as the parameters of movement through space without reference to forces (for example, direction, velocity and acceleration).

Functional neuroimaging studies, as well as studies of brain-damaged patients, have linked movement timing to several cortical and sub-cortical regions, including the cerebellum, basal ganglia and supplementary motor area (SMA). Specifically the basal ganglia and possibly the SMA have been implicated in interval timing at longer timescales (1 second and above), while the cerebellum may be more important for controlling motor timing at shorter timescales (milliseconds). Furthermore, these results indicate that motor timing is not controlled by a single brain region, but by a network of regions that control specific parameters of movement and that depend on the relevant timescale of the rhythmic sequence.

Sequencing

Motor sequencing has been explored in terms of either the ordering of individual movements, such as finger sequences for key presses, or the coordination of subcomponents of complex multi-joint movements. Implicated in this process are various cortical and sub-cortical regions, including the basal ganglia, the SMA and the pre-SMA, the cerebellum, and the premotor and prefrontal cortices, all involved in the production and learning of motor sequences but without explicit evidence of their specific contributions or interactions amongst one another. In animals, neurophysiological studies have demonstrated an interaction between the frontal cortex and the basal ganglia during the learning of movement sequences. Human neuroimaging studies have also emphasized the contribution of the basal ganglia for well-learned sequences.

The cerebellum is arguably important for sequence learning and for the integration of individual movements into unified sequences, while the pre-SMA and SMA have been shown to be involved in organizing or chunking of more complex movement sequences. Chunking, defined as the re-organization or re-grouping of movement sequences into smaller sub-sequences during performance, is thought to facilitate the smooth performance of complex movements and to improve motor memory. Lastly, the premotor cortex has been shown to be involved in tasks that require the production of relatively complex sequences, and it may contribute to motor prediction.

Spatial organization

Few studies of complex motor control have distinguished between sequential and spatial organization, yet expert musical performances demand not only precise sequencing but also spatial organization of movements. Studies in animals and humans have established the involvement of parietal, sensory–motor and premotor cortices in the control of movements, when the integration of spatial, sensory and motor information is required. Few studies so far have explicitly examined the role of spatial processing in the context of musical tasks.

Auditory-motor interactions

Feedforward and feedback interactions

An auditory–motor interaction may be loosely defined as any engagement of or communication between the two systems. Two classes of auditory-motor interaction are "feedforward" and "feedback". In feedforward interactions, it is the auditory system that predominately influences the motor output, often in a predictive way. An example is the phenomenon of tapping to the beat, where the listener anticipates the rhythmic accents in a piece of music. Another example is the effect of music on movement disorders: rhythmic auditory stimuli have been shown to improve walking ability in Parkinson's disease and stroke patients.

Feedback interactions are particularly relevant in playing an instrument such as a violin, or in singing, where pitch is variable and must be continuously controlled. If auditory feedback is blocked, musicians can still execute well-rehearsed pieces, but expressive aspects of performance are affected. When auditory feedback is experimentally manipulated by delays or distortions, motor performance is significantly altered: asynchronous feedback disrupts the timing of events, whereas alteration of pitch information disrupts the selection of appropriate actions, but not their timing. This suggests that disruptions occur because both actions and percepts depend on a single underlying mental representation.

Models of auditory–motor interactions

Several models of auditory–motor interactions have been advanced. The model of Hickok and Poeppel, which is specific for speech processing, proposes that a ventral auditory stream maps sounds onto meaning, whereas a dorsal stream maps sounds onto articulatory representations. They and others suggest that posterior auditory regions at the parieto-temporal boundary are crucial parts of the auditory–motor interface, mapping auditory representations onto motor representations of speech, and onto melodies.

Mirror/echo neurons and auditory–motor interactions

The mirror neuron system has an important role in neural models of sensory–motor integration. There is considerable evidence that mirror neurons respond both when an action is performed and when that action is observed. The system proposed to explain this understanding of actions holds that visual representations of actions are mapped onto our own motor system.

Some mirror neurons are activated both by the observation of goal-directed actions, and by the associated sounds produced during the action. This suggests that the auditory modality can access the motor system. While these auditory–motor interactions have mainly been studied for speech processes, and have focused on Broca's area and the vPMC, as of 2011, experiments have begun to shed light on how these interactions are needed for musical performance. Results point to a broader involvement of the dPMC and other motor areas.

Music and language

Certain aspects of language and melody have been shown to be processed in near identical functional brain areas. Brown, Martinez and Parsons (2006) examined the neurological structural similarities between music and language. Utilizing positron emission tomography (PET), the findings showed that both linguistic and melodic phrases produced activation in almost identical functional brain areas. These areas included the primary motor cortex, supplementary motor area, Broca's area, anterior insula, primary and secondary auditory cortices, temporal pole, basal ganglia, ventral thalamus and posterior cerebellum. Differences were found in lateralization tendencies as language tasks favoured the left hemisphere, but the majority of activations were bilateral which produced significant overlap across modalities.

Syntactical information mechanisms in both music and language have been shown to be processed similarly in the brain. Jentschke, Koelsch, Sallat and Friederici (2008) conducted a study investigating the processing of music in children with specific language impairments (SLI). Children with typical language development (TLD) showed ERP patterns different from those of children with SLI, which reflected their challenges in processing music-syntactic regularities. Strong correlations between the ERAN (Early Right Anterior Negativity—a specific ERP measure) amplitude and linguistic and musical abilities provide additional evidence for the relationship of syntactical processing in music and language.

However, production of melody and production of speech may be subserved by different neural networks. Stewart, Walsh, Frith and Rothwell (2001) studied the differences between speech production and song production using transcranial magnetic stimulation (TMS). Stewart et al. found that TMS applied to the left frontal lobe disturbs speech but not melody, supporting the idea that they are subserved by different areas of the brain. The authors suggest that one reason for the difference is that speech generation can be localized well, but the underlying mechanisms of melodic production cannot. Alternatively, it was also suggested that speech production may be less robust than melodic production and thus more susceptible to interference.

Language processing is a function more of the left side of the brain than the right side, particularly Broca's area and Wernicke's area, though the roles played by the two sides of the brain in processing different aspects of language are still unclear. Music is also processed by both the left and the right sides of the brain. Recent evidence further suggests shared processing between language and music at the conceptual level. It has also been found that, among music conservatory students, the prevalence of absolute pitch is much higher for speakers of tone languages, even controlling for ethnic background, showing that language influences how musical tones are perceived.

Musician vs. non-musician processing

Professional pianists show less cortical activation for complex finger movement tasks due to structural differences in the brain.

Differences

Brain structure differs distinctly between musicians and non-musicians. Gaser and Schlaug (2003) compared the brain structures of professional musicians with non-musicians and discovered gray matter volume differences in motor, auditory and visual-spatial brain regions. Specifically, positive correlations were discovered between musician status (professional, amateur and non-musician) and gray matter volume in the primary motor and somatosensory areas, premotor areas, anterior superior parietal areas and in the inferior temporal gyrus bilaterally. This strong association between musician status and gray matter differences supports the notion that musicians' brains show use-dependent structural changes. Given the pattern of differences across several brain regions, it is unlikely that they are innate; rather, they appear to result from the long-term acquisition and repetitive rehearsal of musical skills.

Brains of musicians also show functional differences from those of non-musicians. Krings, Topper, Foltys, Erberich, Sparing, Willmes and Thron (2000) utilized fMRI to study brain area involvement of professional pianists and a control group while performing complex finger movements. Krings et al. found that the professional piano players showed lower levels of cortical activation in motor areas of the brain. They concluded that fewer neurons needed to be activated in the piano players because of long-term motor practice, which results in different cortical activation patterns. Koeneke, Lutz, Wustenberg and Jancke (2004) reported similar findings in keyboard players. Skilled keyboard players and a control group performed complex tasks involving unimanual and bimanual finger movements. During task conditions, both non-musicians and keyboard players showed strong hemodynamic responses in the cerebellum, but non-musicians showed the stronger response. This finding indicates that different cortical activation patterns emerge from long-term motor practice. This evidence supports previous data showing that musicians require fewer neurons to perform the same movements.

Musicians have been shown to have a significantly more developed left planum temporale, and have also been shown to have greater word memory. Chan's study controlled for age, grade point average and years of education and found that when given a 16-word memory test, the musicians recalled on average one to two more words than their non-musical counterparts.

Similarities

Studies have shown that the human brain has an implicit musical ability. Koelsch, Gunter, Friederici and Schoger (2000) investigated the influence of preceding musical context, task relevance of unexpected chords and the degree of probability of violation on music processing in both musicians and non-musicians. Findings showed that the human brain unintentionally extrapolates expectations about impending auditory input. Even in non-musicians, the extrapolated expectations are consistent with music theory. The ability to process information musically supports the idea of an implicit musical ability in the human brain. In a follow-up study, Koelsch, Schroger, and Gunter (2002) investigated whether ERAN and N5 could be evoked preattentively in non-musicians. Findings showed that both ERAN and N5 can be elicited even in a situation where the musical stimulus is ignored by the listener indicating that there is a highly differentiated preattentive musicality in the human brain.

Gender differences

Minor neurological differences regarding hemispheric processing exist between brains of males and females. Koelsch, Maess, Grossmann and Friederici (2003) investigated music processing through EEG and ERPs and discovered gender differences. Findings showed that females process music information bilaterally and males process music with a right-hemispheric predominance. However, the early negativity of males was also present over the left hemisphere. This indicates that males do not exclusively utilize the right hemisphere for musical information processing. In a follow-up study, Koelsch, Grossman, Gunter, Hahne, Schroger and Friederici (2003) found that boys show lateralization of the early anterior negativity in the left hemisphere but found a bilateral effect in girls. This indicates a developmental effect as early negativity is lateralized in the right hemisphere in men and in the left hemisphere in boys.

Handedness differences

It has been found that subjects who are left-handed, particularly those who are also ambidextrous, perform better than right-handers on short-term memory for pitch. It was hypothesized that this handedness advantage is due to left-handers having more duplication of storage in the two hemispheres than right-handers. Other work has shown that there are pronounced differences between right-handers and left-handers (on a statistical basis) in how musical patterns are perceived when sounds come from different regions of space. This has been found, for example, in the octave illusion and the scale illusion.

Musical imagery

Musical imagery refers to the experience of replaying music by imagining it inside the head. Musicians show a superior ability for musical imagery due to intense musical training. Herholz, Lappe, Knief and Pantev (2008) investigated the differences in neural processing of a musical imagery task in musicians and non-musicians. Utilizing magnetoencephalography (MEG), Herholz et al. examined differences in the processing of a musical imagery task with familiar melodies in musicians and non-musicians. Specifically, the study examined whether the mismatch negativity (MMN) can be based solely on imagery of sounds. The task involved listening to the beginning of a melody, continuing the melody in one's head, and finally hearing a correct or incorrect tone as a further continuation of the melody. The imagery of these melodies was strong enough to obtain an early preattentive brain response to unanticipated violations of the imagined melodies in the musicians. These results indicate that trained musicians rely on similar neural correlates for imagery and perception. Additionally, the findings suggest that modification of the imagery mismatch negativity (iMMN) through intense musical training results in a superior ability for imagery and preattentive processing of music.

Perceptual musical processes and musical imagery may share a neural substrate in the brain. A PET study conducted by Zatorre, Halpern, Perry, Meyer and Evans (1996) investigated cerebral blood flow (CBF) changes related to auditory imagery and perceptual tasks. These tasks examined the involvement of particular anatomical regions as well as functional commonalities between perceptual processes and imagery. Similar patterns of CBF changes provided evidence supporting the notion that imagery processes share a substantial neural substrate with related perceptual processes. Bilateral neural activity in the secondary auditory cortex was associated with both perceiving and imagining songs, implying that processes within the secondary auditory cortex underlie the phenomenological impression of imagined sounds. The supplementary motor area (SMA) was active in both imagery and perceptual tasks, suggesting covert vocalization as an element of musical imagery. CBF increases in the inferior frontal polar cortex and right thalamus suggest that these regions may be related to the retrieval and/or generation of auditory information from memory.

Absolute pitch

Musicians possessing perfect pitch can identify the pitch of musical tones without external reference.

Absolute pitch (AP) is defined as the ability to identify the pitch of a musical tone or to produce a musical tone at a given pitch without the use of an external reference pitch. Neuroscientific research has not discovered a distinct activation pattern common for possessors of AP. Zatorre, Perry, Beckett, Westbury and Evans (1998) examined the neural foundations of AP using functional and structural brain imaging techniques. Positron emission tomography (PET) was utilized to measure cerebral blood flow (CBF) in musicians possessing AP and musicians lacking AP. When presented with musical tones, similar patterns of increased CBF in auditory cortical areas emerged in both groups. AP possessors and non-AP subjects demonstrated similar patterns of left dorsolateral frontal activity when they performed relative pitch judgments. However, in non-AP subjects activation in the right inferior frontal cortex was present whereas AP possessors showed no such activity. This finding suggests that musicians with AP do not need access to working memory devices for such tasks. These findings imply that there is no specific regional activation pattern unique to AP. Rather, the availability of specific processing mechanisms and task demands determine the recruited neural areas.

Emotion

Emotions induced by music activate similar frontal brain regions compared to emotions elicited by other stimuli. Schmidt and Trainor (2001) discovered that valence (i.e. positive vs. negative) of musical segments was distinguished by patterns of frontal EEG activity. Joyful and happy musical segments were associated with increases in left frontal EEG activity whereas fearful and sad musical segments were associated with increases in right frontal EEG activity. Additionally, the intensity of emotions was differentiated by the pattern of overall frontal EEG activity. Overall frontal region activity increased as affective musical stimuli became more intense.

Music is able to create an incredibly pleasurable experience that can be described as "chills".[78] Blood and Zatorre (2001) used PET to measure changes in cerebral blood flow while participants listened to music that they knew gave them "chills" or some other intensely pleasant emotional response. They found that as these chills increase, many changes in cerebral blood flow are seen in brain regions such as the amygdala, orbitofrontal cortex, ventral striatum, midbrain, and the ventral medial prefrontal cortex. Many of these areas appear to be linked to reward, motivation, emotion, and arousal, and are also activated in other pleasurable situations. The resulting pleasure responses involve the release of dopamine, serotonin, and oxytocin. The nucleus accumbens (part of the striatum) is involved both in music-related emotion and in rhythmic timing.

When unpleasant melodies are played, the posterior cingulate cortex activates, which indicates a sense of conflict or emotional pain. The right hemisphere has also been found to be correlated with emotion, which can also activate areas in the cingulate in times of emotional pain, specifically social rejection (Eisenberger). This evidence, along with observations, has led many musical theorists, philosophers and neuroscientists to link emotion with tonality. This seems almost obvious because the tones in music seem like a characterization of the tones in human speech, which indicate emotional content. The vowels in the phonemes of a song are elongated for a dramatic effect, and it seems as though musical tones are simply exaggerations of the normal verbal tonality.

Memory

Neuropsychology of musical memory

Musical memory involves both explicit and implicit memory systems. Explicit musical memory is further differentiated between episodic (where, when and what of the musical experience) and semantic (memory for music knowledge including facts and emotional concepts). Implicit memory centers on the 'how' of music and involves automatic processes such as procedural memory and motor skill learning – in other words skills critical for playing an instrument. Samson and Baird (2009) found that the ability of musicians with Alzheimer's Disease to play an instrument (implicit procedural memory) may be preserved.

Neural correlates of musical memory

A PET study looking into the neural correlates of musical semantic and episodic memory found distinct activation patterns. Semantic musical memory involves the sense of familiarity of songs. The semantic memory for music condition resulted in bilateral activation in the medial and orbital frontal cortex, as well as activation in the left angular gyrus and the left anterior region of the middle temporal gyri. These patterns support the functional asymmetry favouring the left hemisphere for semantic memory. Left anterior temporal and inferior frontal regions that were activated in the musical semantic memory task produced activation peaks specifically during the presentation of musical material, suggesting that these regions are somewhat functionally specialized for musical semantic representations.

Episodic memory of musical information involves the ability to recall the former context associated with a musical excerpt. In the condition invoking episodic memory for music, activations were found bilaterally in the middle and superior frontal gyri and precuneus, with activation predominant in the right hemisphere. Other studies have found the precuneus to become activated in successful episodic recall. As it was activated in the familiar memory condition of episodic memory, this activation may be explained by the successful recall of the melody.

When it comes to memory for pitch, a dynamic and distributed brain network appears to subserve pitch memory processes. Gaab, Gaser, Zaehle, Jancke and Schlaug (2003) examined the functional anatomy of pitch memory using functional magnetic resonance imaging (fMRI). An analysis of performance scores in a pitch memory task revealed a significant correlation between good task performance and activity in the supramarginal gyrus (SMG) as well as the dorsolateral cerebellum. The findings indicate that the dorsolateral cerebellum may act as a pitch discrimination processor and the SMG may act as a short-term pitch information storage site. Left-hemisphere regions were found to be more prominent in the pitch memory task than right-hemisphere regions.

Therapeutic effects of music on memory

Musical training has been shown to aid memory. Altenmuller et al. studied the difference between active and passive musical instruction and found that over a longer (but not a short) period of time, the actively taught students retained much more information than the passively taught students. The actively taught students also showed greater cerebral cortex activation. The passively taught students were not wasting their time, however; like the active group, they displayed greater left-hemisphere activity, which is typical in trained musicians.

Research suggests we listen to the same songs repeatedly because of musical nostalgia. One major study, published in the journal Memory & Cognition, found that music enables the mind to evoke memories of the past.

Attention

Treder et al. identified neural correlates of attention when listening to simplified polyphonic music patterns. In a musical oddball experiment, they had participants shift selective attention to one out of three different instruments in music audio clips, with each instrument occasionally playing one or several notes deviating from an otherwise repetitive pattern. Contrasting attended versus unattended instruments, ERP analysis showed subject- and instrument-specific responses, including P300 and early auditory components. The attended instrument could be classified offline with high accuracy. This indicates that attention paid to a particular instrument in polyphonic music can be inferred from ongoing EEG, a finding that is potentially relevant for building more ergonomic music-listening-based brain-computer interfaces.
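As a rough illustration of how such offline classification of the attended instrument can be set up (not the actual pipeline used by Treder et al.), the sketch below averages single-trial EEG within a few post-stimulus windows and feeds the resulting features to a linear classifier; all shapes, window boundaries, and labels are invented for the example.

```python
# Minimal, hypothetical sketch of single-trial ERP classification:
# window-averaged EEG features fed to a linear discriminant classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def erp_features(epochs, sample_rate, windows=((0.1, 0.2), (0.2, 0.3), (0.3, 0.5))):
    """epochs: (n_trials, n_channels, n_samples); returns window-mean features."""
    feats = []
    for t0, t1 in windows:
        i0, i1 = int(t0 * sample_rate), int(t1 * sample_rate)
        feats.append(epochs[:, :, i0:i1].mean(axis=2))
    return np.concatenate(feats, axis=1)

# Usage with fake data: 200 trials, 32 channels, 0.6 s at 100 Hz;
# label = 1 if the deviant came from the attended instrument, else 0.
rng = np.random.default_rng(2)
epochs = rng.normal(size=(200, 32, 60))
labels = rng.integers(0, 2, size=200)
X = erp_features(epochs, sample_rate=100)
scores = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5)
print(scores.mean())
```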

Development

Musically trained four-year-olds have been found to have greater left-hemisphere intrahemispheric coherence. Musicians have been found to have more developed anterior portions of the corpus callosum in a study by Cowell et al. in 1992. This was confirmed by a study by Schlaug et al. in 1995 that found that classical musicians between the ages of 21 and 36 have significantly larger anterior corpora callosa than non-musical controls. Schlaug also found a strong correlation between musical exposure before the age of seven and a large increase in the size of the corpus callosum. These fibers join the left and right hemispheres and indicate increased relaying between both sides of the brain. This suggests a merging of the spatial and emotional-tonal processing of the right hemisphere with the linguistic processing of the left hemisphere. This large amount of relaying across many different areas of the brain might contribute to music's ability to aid memory function.

Impairment

Focal hand dystonia

Focal hand dystonia is a task-related movement disorder associated with occupational activities that require repetitive hand movements. Focal hand dystonia is associated with abnormal processing in the premotor and primary sensorimotor cortices. An fMRI study examined five guitarists with focal hand dystonia. The study reproduced task-specific hand dystonia by having guitarists use a real guitar neck inside the scanner as well as performing a guitar exercise to trigger abnormal hand movement. The dystonic guitarists showed significantly more activation of the contralateral primary sensorimotor cortex as well as a bilateral underactivation of premotor areas. This activation pattern represents abnormal recruitment of the cortical areas involved in motor control. Even in professional musicians, widespread bilateral cortical region involvement is necessary to produce complex hand movements such as scales and arpeggios. The abnormal shift from premotor to primary sensorimotor activation directly correlates with guitar-induced hand dystonia.

Music agnosia

Music agnosia, an auditory agnosia, is a syndrome of selective impairment in music recognition. Three cases of music agnosia were examined by Dalla Bella and Peretz (1999): C.N., G.L., and I.R. All three of these patients suffered bilateral damage to the auditory cortex, which resulted in musical difficulties while speech understanding remained intact. Their impairment is specific to the recognition of once-familiar melodies; they remain able to recognize environmental sounds and lyrics. Peretz (1996) has studied C.N.'s music agnosia further and reports an initial impairment of pitch processing with spared temporal processing. C.N. later recovered pitch processing abilities but remained impaired in tune recognition and familiarity judgments.

Musical agnosias may be categorized based on the process which is impaired in the individual. Apperceptive music agnosia involves an impairment at the level of perceptual analysis, an inability to encode musical information correctly. Associative music agnosia reflects an impaired representational system which disrupts music recognition. Many cases of music agnosia have resulted from surgery involving the middle cerebral artery. Patient studies have amassed a large amount of evidence demonstrating that the left side of the brain is more suitable for holding long-term memory representations of music and that the right side is important for controlling access to these representations. Associative music agnosias tend to be produced by damage to the left hemisphere, while apperceptive music agnosia reflects damage to the right hemisphere.

Congenital amusia

Congenital amusia, otherwise known as tone deafness, is a term for lifelong musical problems which are not attributable to mental retardation, lack of exposure to music or deafness, or brain damage after birth. Amusic brains have been found in fMRI studies to have less white matter and thicker cortex than controls in the right inferior frontal cortex. These differences suggest abnormal neuronal development in the auditory cortex and inferior frontal gyrus, two areas which are important in musical-pitch processing.

Studies of people with amusia suggest that different processes are involved in speech tonality and musical tonality. Congenital amusics lack the ability to distinguish between pitches and so are, for example, unmoved by dissonance or by the wrong key being played on a piano. They also cannot be taught to remember a melody or to recite a song; however, they are still capable of hearing the intonation of speech, for example, distinguishing between "You speak French." and "You speak French?" when spoken.

Amygdala damage

Damage to the amygdala may impair recognition of scary music.
 
Damage to the amygdala has selective emotional impairments on musical recognition. Gosselin, Peretz, Johnsen and Adolphs (2007) studied S.M., a patient with bilateral damage of the amygdala with the rest of the temporal lobe undamaged and found that S.M. was impaired in recognition of scary and sad music. S.M.'s perception of happy music was normal, as was her ability to use cues such as tempo to distinguish between happy and sad music. It appears that damage specific to the amygdala can selectively impair recognition of scary music.

Selective deficit in music reading

Specific musical impairments may result from brain damage leaving other musical abilities intact. Cappelletti, Waley-Cohen, Butterworth and Kopelman (2000) reported a single case study of patient P.K.C., a professional musician who sustained damage to the left posterior temporal lobe as well as a small right occipitotemporal lesion. After sustaining damage to these regions, P.K.C. was selectively impaired in reading, writing and understanding musical notation but maintained other musical skills. The ability to read aloud letters, words, numbers and symbols (including musical ones) was retained. However, P.K.C. was unable to read aloud musical notes on the staff, regardless of whether the task involved naming them with the conventional letters or singing or playing them. Yet despite this specific deficit, P.K.C. retained the ability to remember and play familiar and new melodies.

Auditory arrhythmia

Arrhythmia in the auditory modality is defined as a disturbance of rhythmic sense and includes deficits such as the inability to rhythmically perform music, to keep time to music, and to discriminate between or reproduce rhythmic patterns. A study investigating the elements of rhythmic function examined patient H.J., who acquired arrhythmia after sustaining a right temporoparietal infarct. Damage to this region impaired H.J.'s central timing system, which is essentially the basis of his global rhythmic impairment. H.J. was unable to generate steady pulses in a tapping task. These findings suggest that keeping a musical beat relies on functioning of the right temporal auditory cortex.

Cognitive musicology

From Wikipedia, the free encyclopedia
Cognitive musicology is a branch of cognitive science concerned with computationally modeling musical knowledge with the goal of understanding both music and cognition.

Cognitive musicology can be differentiated from other branches of music psychology via its methodological emphasis, using computer modeling to study music-related knowledge representation with roots in artificial intelligence and cognitive science. The use of computer models provides an exacting, interactive medium in which to formulate and test theories.

This interdisciplinary field investigates topics such as the parallels between language and music in the brain. Biologically inspired models of computation are often included in research, such as neural networks and evolutionary programs. This field seeks to model how musical knowledge is represented, stored, perceived, performed, and generated. By using a well-structured computer environment, the systematic structures of these cognitive phenomena can be investigated.

Even while enjoying the simplest of melodies, multiple brain processes are synchronizing to comprehend what is going on. After the stimulus enters and undergoes the processes of the ear, it enters the auditory cortex, part of the temporal lobe, which begins processing the sound by assessing its pitch and volume. From here, processing diverges for different aspects of the music. For instance, rhythm is typically processed and regulated by the left frontal cortex, the left parietal cortex, and the right cerebellum. Tonality, the building of musical structure around a central chord, is assessed by the prefrontal cortex and cerebellum (Abram, 2015). Music accesses many brain functions that play an integral role in other higher functions such as motor control, memory, language, reading, and emotion. Research has shown that music can be used as an alternative route to these functions when they are unavailable through non-musical stimuli due to a disorder. Cognitive musicology also explores how music can provide alternative transmission routes for information processing in the brain in conditions such as Parkinson's disease and dyslexia.

Notable researchers

The polymath Christopher Longuet-Higgins, who coined the term "cognitive science", is one of the pioneers of cognitive musicology. Among other things, he is noted for the computational implementation of an early key-finding algorithm. Key finding is an essential element of tonal music, and the key-finding problem has attracted considerable attention in the psychology of music over the past several decades. Carol Krumhansl and Mark Schmuckler proposed an empirically grounded key-finding algorithm which bears their names. Their approach is based on key-profiles which were painstakingly determined by what has come to be known as the probe-tone technique. This algorithm has successfully been able to model the perception of musical key in short excerpts of music, as well as to track listeners' changing sense of key movement throughout an entire piece of music. David Temperley, whose early work within the field of cognitive musicology applied dynamic programming to aspects of music cognition, has suggested a number of refinements to the Krumhansl-Schmuckler Key-Finding Algorithm.
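To make the Krumhansl-Schmuckler idea concrete, the sketch below correlates a piece's pitch-class distribution with the 24 rotated major and minor key profiles and reports the best-matching key. The profile values are the published Krumhansl-Kessler probe-tone ratings, quoted here from memory and best treated as approximate; the toy input distribution is invented.

```python
# A minimal sketch of Krumhansl-Schmuckler key finding: correlate the
# pitch-class distribution of a passage with 24 rotated major/minor key
# profiles and pick the best match. Profile values are approximate
# Krumhansl-Kessler probe-tone ratings.
import numpy as np

MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def find_key(pc_durations):
    """pc_durations: total duration (or count) of each of the 12 pitch classes."""
    best = None
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            r = np.corrcoef(np.roll(profile, tonic), pc_durations)[0, 1]
            if best is None or r > best[0]:
                best = (r, f"{NAMES[tonic]} {mode}")
    return best[1]

# Usage: a toy C-major-ish distribution (heavy on C, E, G) should return C major.
counts = np.array([5, 0, 2, 0, 4, 2, 0, 4, 0, 2, 0, 1], dtype=float)
print(find_key(counts))
```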

Otto Laske was a champion of cognitive musicology. A collection of papers that he co-edited served to heighten the visibility of cognitive musicology and to strengthen its association with AI and music. The foreword of this book reprints a free-wheeling interview with Marvin Minsky, one of the founding fathers of AI, in which he discusses some of his early writings on music and the mind. AI researcher turned cognitive scientist Douglas Hofstadter has also contributed a number of ideas pertaining to music from an AI perspective. Musician Steve Larson, who worked for a time in Hofstadter's lab, formulated a theory of "musical forces" derived by analogy with physical forces. Hofstadter also weighed in on David Cope's experiments in musical intelligence, which take the form of a computer program called EMI that produces music in the style of, say, Bach, or Chopin, or Cope.

Cope's programs are written in Lisp, which turns out to be a popular language for research in cognitive musicology. Desain and Honing have exploited Lisp in their efforts to tap the potential of microworld methodology in cognitive musicology research. Also working in Lisp, Heinrich Taube has explored computer composition from a wide variety of perspectives. There are, of course, researchers who have chosen to use languages other than Lisp for their research into the computational modeling of musical processes. Robert Rowe, for example, explores "machine musicianship" through C++ programming. A rather different computational methodology for researching musical phenomena is the toolkit approach advocated by David Huron. At a higher level of abstraction, Geraint Wiggins has investigated general properties of music knowledge representations such as structural generality and expressive completeness.

Although a great deal of cognitive musicology research features symbolic computation, notable contributions have been made from the biologically inspired computational paradigms. For example, Jamshed Bharucha and Peter Todd have modeled music perception in tonal music with neural networks. Al Biles has applied genetic algorithms to the composition of jazz solos. Numerous researchers have explored algorithmic composition grounded in a wide range of mathematical formalisms.

Within cognitive psychology, among the most prominent researchers is Diana Deutsch, who has engaged in a wide variety of work ranging from studies of absolute pitch and musical illusions to the formulation of musical knowledge representations to relationships between music and language. Equally important is Aniruddh D. Patel, whose work combines traditional methodologies of cognitive psychology with neuroscience. Patel is also the author of a comprehensive survey of cognitive science research on music.

Perhaps the most significant contribution to viewing music from a linguistic perspective is the Generative Theory of Tonal Music (GTTM) proposed by Fred Lerdahl and Ray Jackendoff. Although GTTM is presented at the algorithmic level of abstraction rather than the implementational level, their ideas have found computational manifestations in a number of computational projects.

For the German-speaking area, Laske's conception of cognitive musicology has been advanced by Uwe Seifert in his book Systematische Musiktheorie und Kognitionswissenschaft. Zur Grundlegung der kognitiven Musikwissenschaft ("Systematic music theory and cognitive science. The foundation of cognitive musicology") and subsequent publications.

Music and language acquisition skills

Both music and speech rely on sound processing and require interpretation of several sound features such as timbre, pitch, duration, and their interactions (Elzbieta, 2015). An fMRI study revealed that Broca's and Wernicke's areas, two areas known to be activated during speech and language processing, were also activated while subjects listened to unexpected musical chords (Elzbieta, 2015). This relation between language and music may explain why exposure to music has been found to accelerate the development of behaviors related to language acquisition. The widely known Suzuki method of music education emphasizes learning music by ear over reading musical notation, and preferably begins formal lessons between the ages of 3 and 5 years. One fundamental argument in favor of this approach points to a parallelism between natural speech acquisition and purely auditory-based musical training, as opposed to musical training that relies on visual cues. There is evidence that children who take music classes gain skills that help them in language acquisition and learning (Oechslin, 2015), an ability that relies heavily on the dorsal pathway. Other studies show an overall enhancement of verbal intelligence in children taking music classes. Since both activities tap into several integrated brain functions and share brain pathways, it is understandable why strength in music acquisition might also correlate with strength in language acquisition.

Music and pre-natal development

Extensive prenatal exposure to a melody has been shown to induce neural representations that last for several months. In a study by Partanen in 2013, mothers in the learning group listened to the 'Twinkle, Twinkle, Little Star' melody five times per week during their last trimester. After birth, and again at the age of 4 months, infants in the control and learning groups were played a modified melody in which some of the notes were changed. Both at birth and at 4 months, infants in the learning group had stronger event-related potentials to the unchanged notes than the control group. Since listening to music at a young age can already establish lasting neural representations, exposure to music could help strengthen brain plasticity in areas of the brain involved in language and speech processing.

Music therapy effect on cognitive disorders

If neural pathways can be stimulated through entertaining activities, they are more likely to remain accessible. This helps explain why music is so powerful and can be used in such a wide range of therapies. Music that a person enjoys elicits a response that we are all familiar with. Listening to music is not perceived as a chore because it is enjoyable; however, the brain is still learning and utilizing the same functions it would when speaking or acquiring language. Music has the capability to be a very productive form of therapy largely because it is stimulating, entertaining, and rewarding. Using fMRI, Menon and Levitin found for the first time that listening to music strongly modulates activity in a network of mesolimbic structures involved in reward processing. This includes the nucleus accumbens and the ventral tegmental area (VTA), as well as the hypothalamus and insula, which are all thought to be involved in regulating autonomic and physiological responses to rewarding and emotional stimuli (Gold, 2013).

Pitch perception has been positively correlated with phonemic awareness and reading abilities in children (Flaugnacco, 2014). Likewise, the ability to tap to a rhythmic beat correlates with performance on reading and attention tests (Flaugnacco, 2014). These are only a fraction of the studies that have linked reading skills with rhythmic perception; a meta-analysis of 25 cross-sectional studies found a significant association between music training and reading skills (Butzlaff, 2000). Because the correlation is so extensive, researchers have investigated whether music could serve as an alternative pathway to strengthen reading abilities in people with developmental disorders such as dyslexia. Dyslexia is a disorder characterized by a long-lasting difficulty in reading acquisition, specifically text decoding. Reading is slow and inaccurate despite adequate intelligence and instruction. The difficulties have been shown to stem from a phonological core deficit that impacts reading comprehension, memory, and prediction abilities (Flaugnacco, 2014). Music training has been shown to modify reading and phonological abilities even when these skills are severely impaired. By improving temporal processing and rhythm abilities through training, phonological awareness and reading skills in children with dyslexia were improved. The OPERA hypothesis proposed by Patel (2011) states that because music places higher demands on shared processing networks than speech does, it drives adaptive brain plasticity in the same neural networks involved in language processing.

Parkinson's disease is a complex neurological disorder that negatively impacts both motor and non-motor functions and is caused by the degeneration of dopaminergic (DA) neurons in the substantia nigra (Ashoori, 2015). This in turn leads to a DA deficiency in the basal ganglia. Dopamine deficiencies in these areas of the brain have been shown to cause symptoms such as tremors at rest, rigidity, akinesia, and postural instability. They are also associated with impairments of an individual's internal timing (Ashoori, 2015). Rhythm is a powerful sensory cue that has been shown to help regulate motor timing and coordination when the brain's internal timing system is deficient. Some studies have shown that musically cued gait training significantly improves multiple deficits of Parkinson's disease, including gait, motor timing, and perceptual timing. Ashoori's study consisted of 15 non-demented patients with idiopathic Parkinson's disease who had no prior musical training and maintained their dopamine therapy during the trials. There were three 30-minute training sessions per week for 1 month, in which the participants walked to the beat of German folk music without explicit instructions to synchronize their footsteps to the beat. Compared to pre-training gait performance, the Parkinson's patients showed significant improvement in gait velocity and stride length during the training sessions. The gait improvement was sustained for 1 month after training, which indicates a lasting therapeutic effect. Even though synchronization was not explicitly instructed, the gait of these Parkinson's patients automatically synchronized with the rhythm of the music. The lasting therapeutic effect also suggests that the training may have affected the patients' internal timing in a way that could not be accessed by other means.

Music and emotion

From Wikipedia, the free encyclopedia

Simon Vouet, 'Saint Cecilia'.
 
The study of 'music and emotion' seeks to understand the psychological relationship between human affect and music. It is a branch of music psychology with numerous areas of study, including the nature of emotional reactions to music, how characteristics of the listener may determine which emotions are felt, and which components of a musical composition or performance may elicit certain reactions. The field draws upon, and has significant implications for, such areas as philosophy, musicology, music therapy, music theory and aesthetics, as well as the acts of musical composition and performance.

Philosophical approaches

Appearance emotionalism

Two of the most influential philosophers in the aesthetics of music are Stephen Davies and Jerrold Levinson. Davies calls his view of the expressiveness of emotions in music "appearance emotionalism", which holds that music expresses emotion without feeling it. Objects can convey emotion because their structures can contain certain characteristics that resemble emotional expression. "The resemblance that counts most for music's expressiveness ... is between music's temporally unfolding dynamic structure and configurations of human behaviour associated with the expression of emotion." The observer can note emotions from the listener's posture, gait, gestures, attitude, and comportment.

Associations between musical features and emotion differ among individuals. Appearance emotionalism claims that the associations many listeners perceive are what constitute the expressiveness of music. Which musical features are more commonly associated with which emotions is a question for music psychology. Davies claims that expressiveness is an objective property of music, not something subjective in the sense of being projected into the music by the listener. Music's expressiveness is certainly response-dependent, i.e. it is realized in the listener's judgement. Skilled listeners attribute emotional expressiveness to a given piece of music in very similar ways, which according to Davies (2006) indicates that the expressiveness of music is somewhat objective: if the music lacked expressiveness, no expression could be projected into it as a reaction to the music.

The process theory

The philosopher Jenefer Robinson assumes a mutual dependence between cognition and elicitation in her description of the 'emotions as process, music as process' theory (or 'process' theory). Robinson argues that the process of emotional elicitation begins with an 'automatic, immediate response that initiates motor and autonomic activity and prepares us for possible action', which triggers a process of cognition that may enable listeners to 'name' the felt emotion. This series of events continually exchanges with new, incoming information. Robinson argues that emotions may transform into one another, causing blends, conflicts, and ambiguities that make it difficult to describe with one word the emotional state one experiences at any given moment; instead, inner feelings are better thought of as the products of multiple emotional 'streams'. Robinson argues that music is a series of simultaneous processes, and that it is therefore an ideal medium for mirroring the more 'cognitive' aspects of emotion, such as musical themes 'desiring' resolution or leitmotifs mirroring memory processes. These simultaneous musical processes can reinforce or conflict with each other and thus also express the way one emotion 'morphs into another over time'.

Conveying emotion through music

The ability to perceive emotion in music is said to develop early in childhood, and improve significantly throughout development. The capacity to perceive emotion in music is also subject to cultural influences, and both similarities and differences in emotion perception have been observed in cross-cultural studies. Empirical research has looked at which emotions can be conveyed as well as what structural factors in music help contribute to the perceived emotional expression. There are two schools of thought on how we interpret emotion in music. The cognitivists' approach argues that music simply displays an emotion, but does not allow for the personal experience of emotion in the listener. Emotivists argue that music elicits real emotional responses in the listener.

It has been argued that the emotion experienced from a piece of music is a multiplicative function of structural features, performance features, listener features and contextual features of the piece, shown as:
Experienced Emotion = Structural features × Performance features × Listener features × Contextual features
where:
Structural features = Segmental features × Suprasegmental features
Performance features = Performer skill × Performer state
Listener features = Musical expertise × Stable disposition × Current motivation
Contextual features = Location × Event
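
To make the multiplicative form of this model concrete, the following is a minimal, purely illustrative Python sketch (not from the source); the function names mirror the equations above, while the numeric scores and the 0-to-1 scaling are hypothetical assumptions.

```python
# Minimal sketch of the multiplicative model described above.
# The 0-to-1 scores and their scaling are illustrative assumptions,
# not part of the published model.

def structural_features(segmental: float, suprasegmental: float) -> float:
    """Structural features = Segmental features x Suprasegmental features."""
    return segmental * suprasegmental

def performance_features(performer_skill: float, performer_state: float) -> float:
    """Performance features = Performer skill x Performer state."""
    return performer_skill * performer_state

def listener_features(expertise: float, disposition: float, motivation: float) -> float:
    """Listener features = Musical expertise x Stable disposition x Current motivation."""
    return expertise * disposition * motivation

def contextual_features(location: float, event: float) -> float:
    """Contextual features = Location x Event."""
    return location * event

def experienced_emotion(structural: float, performance: float,
                        listener: float, contextual: float) -> float:
    """Experienced emotion = Structural x Performance x Listener x Contextual."""
    return structural * performance * listener * contextual

# Hypothetical usage: scores in [0, 1] for a skilled performance of a
# well-structured piece heard by a motivated, experienced listener.
emotion = experienced_emotion(
    structural_features(segmental=0.8, suprasegmental=0.9),
    performance_features(performer_skill=0.9, performer_state=0.7),
    listener_features(expertise=0.6, disposition=0.8, motivation=0.9),
    contextual_features(location=0.7, event=0.9),
)
print(f"Experienced emotion (relative, unitless): {emotion:.3f}")
```

Because the factors multiply rather than add, a near-zero score on any single factor drives the overall experienced emotion toward zero, which matches the later observation that the factors' effects compound one another.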

Structural features

Structural features are divided into two parts, segmental features and suprasegmental features. Segmental features are the individual sounds or tones that make up the music; this includes acoustic structures such as duration, amplitude, and pitch. Suprasegmental features are the foundational structures of a piece, such as melody, tempo and rhythm. There are a number of specific musical features that are highly associated with particular emotions. Within the factors affecting emotional expression in music, tempo is typically regarded as the most important, but a number of other factors, such as mode, loudness, and melody, also influence the emotional valence of the piece.

Structural feature | Definition | Associated emotions
Tempo | The speed or pace of a musical piece | Fast tempo: happiness, excitement, anger. Slow tempo: sadness, serenity.
Mode | The type of scale | Major tonality: happiness, joy. Minor tonality: sadness.
Loudness | The physical strength and amplitude of a sound | Intensity, power, anger.
Melody | The linear succession of musical tones that the listener perceives as a single entity | Complementing harmonies: happiness, relaxation, serenity. Clashing harmonies: excitement, anger, unpleasantness.
Rhythm | The regularly recurring pattern or beat of a song | Smooth/consistent rhythm: happiness, peace. Rough/irregular rhythm: amusement, uneasiness. Varied rhythm: joy.
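
As a purely illustrative aside (not from the source), the associations in the table above could be encoded as a simple lookup structure for analysis; the Python dictionary below and its key names are hypothetical.

```python
# Illustrative encoding of the feature-emotion associations from the table above.
# The structure and key names are hypothetical conveniences, not an established dataset.
FEATURE_EMOTIONS = {
    "tempo": {
        "fast": ["happiness", "excitement", "anger"],
        "slow": ["sadness", "serenity"],
    },
    "mode": {
        "major": ["happiness", "joy"],
        "minor": ["sadness"],
    },
    "loudness": {
        "loud": ["intensity", "power", "anger"],
    },
    "melody": {
        "complementing harmonies": ["happiness", "relaxation", "serenity"],
        "clashing harmonies": ["excitement", "anger", "unpleasantness"],
    },
    "rhythm": {
        "smooth/consistent": ["happiness", "peace"],
        "rough/irregular": ["amusement", "uneasiness"],
        "varied": ["joy"],
    },
}

# Example lookup: which emotions are typically associated with a fast tempo?
print(FEATURE_EMOTIONS["tempo"]["fast"])  # ['happiness', 'excitement', 'anger']
```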

Some studies find that the perception of basic emotional features in music is a cultural universal, though people perceive emotion more easily, and perceive more nuanced emotion, in music from their own culture.

Music has a direct connection to human emotional states, and different musical structures have been found to relate to physiological responses. Research has shown that suprasegmental structures such as tonal space, specifically dissonance, create unpleasant negative emotions in participants. The emotional responses were measured with physiological assessments, such as skin conductance and electromyographic (EMG) signals, while participants listened to musical excerpts. Further research on psychophysiological measures related to music found similar results: musical structures of rhythmic articulation, accentuation, and tempo correlated strongly with physiological measures, in this case heart rate and respiratory measures that were in turn correlated with self-report questionnaires.

Music also affects socially relevant memories, specifically memories produced by nostalgic musical excerpts (e.g., music from a significant period in one's life, such as music listened to on road trips). Musical structures are more strongly processed in certain areas of the brain when the music evokes nostalgia: the inferior frontal gyrus, substantia nigra, cerebellum, and insula were all found to respond more strongly to nostalgic music than to non-nostalgic music. Brain activity is highly individual, and many of the musical excerpts have particular effects that depend on individuals' past life experiences, a caveat that should be kept in mind when generalizing findings across individuals.

Performance features

Performance features refer to the manner in which a piece of music is executed by the performer(s). These are broken into two categories: performer skills and performer state. Performer skills are the compound ability and appearance of the performer, including physical appearance, reputation, and technical skills. Performer state is the interpretation, motivation, and stage presence of the performer.

Listener features

Listener features refer to the individual and social identity of the listener(s). This includes their personality, age, knowledge of music, and motivation to listen to the music.

Contextual features

Contextual features are aspects of the performance such as the location and the particular occasion for the performance (i.e., funeral, wedding, dance).

These different factors influence expressed emotion at different magnitudes, and their effects are compounded by one another; thus, experienced emotion is felt more strongly when more factors are present. The order in which the factors are listed within the model denotes how much weight they carry in the equation. For this reason, the bulk of research has focused on structural features and listener features.

Conflicting cues

Which emotion is perceived depends on the context of the piece of music. Past research has argued that opposing emotions like happiness and sadness fall on a bipolar scale, where both cannot be felt at the same time. More recent research has suggested that happiness and sadness are experienced separately, which implies that they can be felt concurrently. One study investigated the latter possibility by having participants listen to computer-manipulated musical excerpts that had mixed cues between tempo and mode. Examples of mixed-cue music include a piece in a major key with a slow tempo and a piece in a minor key with a fast tempo. Participants then rated the extent to which the piece conveyed happiness or sadness. The results indicated that mixed-cue music conveys both happiness and sadness; however, it remained unclear whether participants perceived happiness and sadness simultaneously or vacillated between these two emotions. A follow-up study was done to examine these possibilities. While listening to mixed- or consistent-cue music, participants pressed one button when the music conveyed happiness, and another button when it conveyed sadness. The results revealed that subjects pressed both buttons simultaneously during songs with conflicting cues. These findings indicate that listeners can perceive both happiness and sadness concurrently. This has significant implications for how structural features influence emotion, because when a mix of structural cues is used, a number of emotions may be conveyed.

Specific listener features

Development

Studies indicate that the ability to understand emotional messages in music starts early, and improves throughout child development. Studies investigating music and emotion in children primarily play a musical excerpt for children and have them look at pictorial expressions of faces. These facial expressions display different emotions and children are asked to select the face that best matches the music's emotional tone. Studies have shown that children are able to assign specific emotions to pieces of music; however, there is debate regarding the age at which this ability begins.

Infants

An infant is often exposed to a mother's speech that is musical in nature. It is possible that motherly singing allows the mother to relay emotional messages to the infant. Infants also tend to prefer positive speech to neutral speech, and happy music to negative music. It has also been posited that listening to their mother's singing may play a role in identity formation. This hypothesis is supported by a study that interviewed adults and asked them to describe musical experiences from their childhood; findings showed that music contributed to the development of emotional knowledge during childhood.

Pre-school children

Studies have shown that children at the age of 4 begin to distinguish between emotions found in musical excerpts in ways that are similar to adults, and the ability to distinguish these musical emotions seems to increase with age until adulthood. However, children at the age of 3 were unable to distinguish emotions expressed in music by matching a facial expression with the type of emotion found in the music. Some emotions, such as anger and fear, were also found to be harder to distinguish within music.

Elementary-age children

In studies, four- and five-year-olds are asked to label musical excerpts with the affective labels "happy", "sad", "angry", and "afraid". In one study, four-year-olds did not perform above chance with the labels "sad" and "angry", and five-year-olds did not perform above chance with the label "afraid". A follow-up study found conflicting results, with five-year-olds performing much like adults; however, all ages confused the categories "angry" and "afraid". Pre-school and elementary-age children listened to twelve short melodies, each in either major or minor mode, and were instructed to choose between four pictures of faces: happy, contented, sad, and angry. All the children, even those as young as three years old, performed above chance in assigning positive faces to major mode and negative faces to minor mode.

Personality effects

Different people perceive events differently based on their individual characteristics. Similarly, the emotions elicited by listening to different types of music seem to be affected by factors such as personality and previous musical training. People with the personality trait of agreeableness have been found to have higher emotional responses to music in general. Stronger sad feelings have also been associated with people high in agreeableness and neuroticism. While some studies have shown that musical training is correlated with music evoking mixed feelings, as well as with higher IQ and emotional-comprehension test scores, other studies refute the claim that musical training affects perception of emotion in music. It is also worth noting that previous exposure to music can affect later behavioural choices, schoolwork, and social interactions; previous music exposure therefore seems to affect the personality and emotions of a child later in life, and would subsequently affect their ability to perceive as well as express emotions during exposure to music. Gender, however, has not been shown to lead to differences in the perception of emotions found in music. Further research into which factors affect an individual's perception of emotion in music, and into the ability of the individual to have music-induced emotions, is needed.

Eliciting emotion through music

Along with the research that music conveys an emotion to its listener(s), it has also been shown that music can produce emotion in the listener(s). This view often causes debate because the emotion is produced within the listener, and is consequently hard to measure. In spite of controversy, studies have shown observable responses to elicited emotions, which reinforces the Emotivists' view that music does elicit real emotional responses.

Responses to elicited emotion

The structural features of music not only help convey an emotional message to the listener, but also may create emotion in the listener. These emotions can be completely new feelings or may be an extension of previous emotional events. Empirical research has shown how listeners can absorb the piece's expression as their own emotion, as well as invoke a unique response based on their personal experiences.

Basic emotions

In research on eliciting emotion, participants report personally feeling a certain emotion in response to hearing a musical piece. Researchers have investigated whether the same structures that convey a particular emotion can elicit it as well. The researchers presented excerpts of fast-tempo, major-mode music and slow-tempo, minor-mode music to participants; these musical structures were chosen because they are known to convey happiness and sadness, respectively. Participants rated their own emotions with elevated levels of happiness after listening to music with structures that convey happiness, and with elevated sadness after music with structures that convey sadness. This evidence suggests that the same structures that convey emotions in music can also elicit those same emotions in the listener.

In light of this finding, there has been particular controversy about music eliciting negative emotions. Cognitivists argue that choosing to listen to music that elicits negative emotions such as sadness would be paradoxical, as listeners would not willingly strive to induce sadness. Emotivists, however, purport that music does elicit negative emotions, and that listeners knowingly choose to listen in order to feel sadness in an impersonal way, similar to a viewer's desire to watch a tragic film. The reasons why people sometimes listen to sad music when feeling sad have been explored by interviewing people about their motivations for doing so. This research found that people sometimes listen to sad music when feeling sad in order to intensify feelings of sadness. Other reasons for listening to sad music when feeling sad were: to retrieve memories, to feel closer to other people, for cognitive reappraisal, to feel befriended by the music, to distract oneself, and for mood enhancement.

Researchers have also found a relationship between one's familiarity with a piece of music and the emotions it elicits. In one study, half of the participants were played twelve random musical excerpts once and rated their emotions after each piece; the other half listened to twelve random excerpts five times and started their ratings on the third repetition. Participants who listened to the excerpts five times rated their emotions with higher intensity than participants who listened to them only once, suggesting that familiarity with a piece of music increases the emotions experienced by the listener.

Emotional memories and actions

Music may not only elicit new emotions but also connect listeners with other emotional sources. Music serves as a powerful cue to recall emotional memories back into awareness. Because music is such a pervasive part of social life, present in weddings, funerals and religious ceremonies, it brings back emotional memories that are often already associated with it. Music is also processed by the lower, sensory levels of the brain, making it impervious to later memory distortions. Therefore, creating a strong connection between emotion and music within memory makes it easier to recall one when prompted by the other. Music can also tap into empathy, inducing emotions that are assumed to be felt by the performer or composer. Listeners can become sad because they recognize that those emotions must have been felt by the composer, much as the viewer of a play can empathize with the actors.

Listeners may also respond to emotional music through action. Throughout history, music has been composed to inspire people to specific action, to march, dance, sing or fight, consequently heightening the emotions in all these events. In fact, many people report being unable to sit still when certain rhythms are played, in some cases even engaging in subliminal actions when physical manifestations should be suppressed. Examples of this can be seen in young children's spontaneous bursts into motion upon hearing music, or in the exuberant expressions shown at concerts.

Juslin & Västfjäll's BRECVEM model

Juslin & Västfjäll developed a model of seven ways in which music can elicit emotion, called the BRECVEM model. 

Brain Stem Reflex: 'This refers to a process whereby an emotion is induced by music because one or more fundamental acoustical characteristics of the music are taken by the brain stem to signal a potentially important and urgent event. All other things being equal, sounds that are sudden, loud, dissonant, or feature fast temporal patterns induce arousal or feelings of unpleasantness in listeners...Such responses reflect the impact of auditory sensations – music as sound in the most basic sense.' 

Rhythmic Entrainment: 'This refers to a process whereby an emotion is evoked by a piece of music because a powerful, external rhythm in the music influences some internal bodily rhythm of the listener (e.g. heart rate), such that the latter rhythm adjusts toward and eventually 'locks in' to a common periodicity. The adjusted heart rate can then spread to other components of emotion such as feeling, through proprioceptive feedback. This may produce an increased level of arousal in the listener.' 

Evaluative Conditioning: 'This refers to a process whereby an emotion is induced by a piece of music simply because this stimulus has been paired repeatedly with other positive or negative stimuli. Thus, for instance, a particular piece of music may have occurred repeatedly together in time with a specific event that always made you happy (e.g., meeting your best friend). Over time, through repeated pairings, the music will eventually come to evoke happiness even in the absence of the friendly interaction.' 

Emotional Contagion: 'This refers to a process whereby an emotion is induced by a piece of music because the listener perceives the emotional expression of the music, and then "mimics" this expression internally, which by means of either peripheral feedback from muscles, or a more direct activation of the relevant emotional representations in the brain, leads to an induction of the same emotion.' 

Visual Imagery: 'This refers to a process whereby an emotion is induced in a listener because he or she conjures up visual images (e.g., of a beautiful landscape) while listening to the music.' 

Episodic memory: 'This refers to a process whereby an emotion is induced in a listener because the music evokes a memory of a particular event in the listener's life. This is sometimes referred to as the "Darling, they are playing our tune" phenomenon.'

Musical expectancy: 'This refers to a process whereby an emotion is induced in a listener because a specific feature of the music violates, delays, or confirms the listener's expectations about the continuation of the music.'

Musical expectancy

With regard to violations of expectation in music, several interesting results have been found. For example, listening to unconventional music may sometimes cause a meaning threat and result in compensatory behaviour in order to restore meaning.

Aesthetic Judgement and BRECVEMA

In 2013, Juslin added a further mechanism to the BRECVEM model, aesthetic judgement, yielding BRECVEMA. Aesthetic judgement refers to the criteria each individual uses as a metric for music's aesthetic value, which can involve a number of personal preferences, such as the message conveyed, the skill presented, or the novelty of the style or idea.

Comparison of conveyed and elicited emotions

Evidence for emotion in music

There is a considerable body of evidence that listeners can identify specific emotions with certain types of music, but there is less concrete evidence that music elicits emotions. This is because elicited emotion is subjective, which makes it difficult to find a valid criterion for studying it. Elicited and conveyed emotion in music is usually understood from three types of evidence: self-report, physiological responses, and expressive behavior. Researchers use one or a combination of these methods to investigate emotional reactions to music.

Self-report

The self-report method is a verbal report by the listener regarding what they are experiencing. This is the most widely used method for studying emotion, and it has shown that people identify emotions and personally experience emotions while listening to music. Research in this area has shown that listeners' emotional responses are highly consistent; in fact, a meta-analysis of 41 studies on music performance found that happiness, sadness, tenderness, threat, and anger were identified above chance by listeners. Another study compared untrained listeners to musically trained listeners: both groups were required to categorize musical excerpts that conveyed similar emotions, and the categorizations did not differ between the trained and the untrained, demonstrating that untrained listeners are highly accurate in perceiving emotion. It is more difficult to find evidence for elicited emotion, as it depends solely on the subjective response of the listener. This leaves reporting vulnerable to self-report biases, such as participants responding according to social prescriptions or as they think the experimenter wants them to. As a result, the validity of the self-report method is often questioned, and researchers are consequently reluctant to draw definitive conclusions solely from these reports.

Physiological responses

Emotions are known to create physiological, or bodily, changes in a person, which can be tested experimentally. Some evidence shows that these changes occur in the nervous system: arousing music is related to increased heart rate and muscle tension, while calming music is connected to decreased heart rate and muscle tension and increased skin temperature. Other research identifies outward physical responses, such as shivers or goose bumps caused by changes in harmony, and tears or a lump in the throat provoked by changes in melody. Researchers test these responses with instruments for physiological measurement, such as pulse-rate recording.

Expressive behavior

People are also known to show outward manifestations of their emotional states while listening to music. Studies using facial electromyography (EMG) have found that people react with subliminal facial expressions when listening to expressive music. In addition, music provides a stimulus for expressive behavior in many social contexts, such as concerts, dances, and ceremonies. Although these expressive behaviors can be measured experimentally, there have been very few controlled studies observing this behavior.

Strength of effects

Within the comparison between elicited and conveyed emotions, researchers have examined the relationship between these two types of responses to music. In general, research agrees that feeling and perception ratings are highly correlated, but not identical. More specifically, studies are inconclusive as to whether one response has a stronger effect than the other, and in what ways these two responses relate.

Conveyed more than elicited

In one study, participants heard a random selection of 24 excerpts displaying six types of emotions, five times in a row. Half of the participants described the emotions the music conveyed, and the other half reported how the music made them feel. The results showed that emotions conveyed by music were more intense than the emotions elicited by the same piece of music. Another study investigated under which specific conditions strong emotions were conveyed. Findings showed that ratings for conveyed emotions were higher for happy responses to music with consistent cues for happiness (i.e., fast tempo and major mode), for sad responses to music with consistent cues for sadness (i.e., slow tempo and minor mode), and for sad responses in general. These studies suggest that people can recognize the emotion displayed in music more readily than they feel it personally.

Sometimes conveyed, sometimes elicited

Another study had 32 participants listen to twelve musical pieces and found that the strength of perceived and elicited emotions depended on the structure of the piece of music. Perceived emotions were stronger than felt emotions when listeners rated arousal and positive and negative activation, whereas elicited emotions were stronger than perceived emotions when rating pleasantness.

Elicited more than conveyed

In another study, analysis revealed that emotional responses were stronger than the listeners' perceptions of emotions. This study used a between-subjects design in which 20 listeners judged to what extent they perceived four emotions: happy, sad, peaceful, and scared. A separate group of 19 listeners rated to what extent they experienced each of these emotions. The findings showed that all music stimuli elicited specific emotions for the group rating elicited emotion, while music stimuli only occasionally conveyed emotion to the group identifying which emotions the music conveyed. Based on these inconsistent findings, much research remains to be done to determine how conveyed and elicited emotions are similar and different. There is also disagreement about whether music induces 'true' emotions or whether the emotions reported as felt in studies are simply participants stating the emotions found in the music they are listening to.

Music as a therapeutic tool

Music therapy has been shown to be an effective treatment for various ailments. Therapeutic techniques involve eliciting emotions by listening to music, composing music or lyrics, and performing music.

Music therapy sessions may be able to help drug users who are attempting to break a drug habit, with users reporting that they feel better able to experience emotions without the aid of drugs. Music therapy may also be a viable option for people experiencing extended hospital stays due to illness. In one study, music therapy provided child oncology patients with enhanced environmental support and elicited more engaging behaviours from the children. In a study of troubled teenagers by Keen, music therapy allowed therapists to interact with the teenagers with less resistance, thus facilitating the teenagers' self-expression.

Music therapy has also shown great promise in individuals with autism, serving as an emotional outlet for these patients. While other avenues of emotional expression and understanding may be difficult for people with autism, music may provide those with limited understanding of socio-emotional cues a way of accessing emotion.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...