Cognitive musicology is a branch of cognitive science concerned with computationally modeling musical knowledge with the goal of understanding both music and cognition.
Cognitive musicology can be differentiated from other branches of music psychology via its methodological emphasis, using computer modeling to study music-related knowledge representation with roots in artificial intelligence and cognitive science. The use of computer models provides an exacting, interactive medium in which to formulate and test theories.
This interdisciplinary field investigates topics such as the parallels between language and music in the brain. Biologically inspired models of computation are often included in research, such as neural networks and evolutionary programs. This field seeks to model how musical knowledge is represented, stored, perceived, performed, and generated. By using a well-structured computer environment, the systematic structures of these cognitive phenomena can be investigated.
Even while enjoying the simplest of melodies, multiple brain processes synchronize to comprehend what is going on. After the stimulus enters the ear and undergoes its processing there, it reaches the auditory cortex, part of the temporal lobe, which begins processing the sound by assessing its pitch and volume. From there, brain functioning diverges across the analysis of different aspects of the music. Rhythm, for instance, is typically processed and regulated by the left frontal cortex, the left parietal cortex, and the right cerebellum, while tonality, the building of musical structure around a central chord, is assessed by the prefrontal cortex and cerebellum (Abram, 2015). Music engages many brain functions that play an integral role in other higher functions such as motor control, memory, language, reading, and emotion. Research has shown that music can be used as an alternative route to these functions when they are unavailable through non-musical stimuli because of a disorder. Cognitive musicology therefore also explores how music can provide alternative transmission routes for information processing in the brain in conditions such as Parkinson's disease and dyslexia.
Notable researchers
The polymath Christopher Longuet-Higgins,
who coined the term "cognitive science", is one of the pioneers of
cognitive musicology. Among other things, he is noted for the
computational implementation of an early key-finding algorithm.
Establishing the key is an essential element of perceiving tonal music, and the key-finding
problem has attracted considerable attention in the psychology of music
over the past several decades. Carol Krumhansl and Mark Schmuckler
proposed an empirically grounded key-finding algorithm which bears their
names.
Their approach is based on key-profiles which were painstakingly
determined by what has come to be known as the probe-tone technique.
This algorithm has successfully been able to model the perception of
musical key in short excerpts of music, as well as to track listeners'
changing sense of key movement throughout an entire piece of music.
David Temperley, whose early work within the field of cognitive
musicology applied dynamic programming to aspects of music cognition,
has suggested a number of refinements to the Krumhansl-Schmuckler
Key-Finding Algorithm.
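The basic Krumhansl-Schmuckler approach can be sketched in a few lines: correlate the piece's pitch-class duration distribution against each of the 24 rotated major and minor key profiles, and report the key with the highest correlation. The following is a minimal illustrative sketch, not a definitive implementation, using the commonly cited Krumhansl-Kessler probe-tone profile values; it implements only the basic algorithm, without Temperley's refinements.

```python
# Sketch of Krumhansl-Schmuckler-style key finding.
# Profile values are the widely cited Krumhansl-Kessler probe-tone ratings.
import math

MAJOR_PROFILE = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR_PROFILE = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def find_key(pc_durations):
    """pc_durations: total duration of each pitch class C..B (12 numbers).
    Returns the (key name, correlation) pair with the highest correlation."""
    best = None
    for tonic in range(12):
        for profile, mode in ((MAJOR_PROFILE, "major"), (MINOR_PROFILE, "minor")):
            # Rotate the input so the candidate tonic aligns with profile index 0.
            rotated = [pc_durations[(tonic + i) % 12] for i in range(12)]
            r = pearson(rotated, profile)
            if best is None or r > best[1]:
                best = (f"{NOTE_NAMES[tonic]} {mode}", r)
    return best

# A C major scale, one unit of duration per scale degree:
key, r = find_key([1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1])
print(key)  # -> C major
```

Note that even for this ambiguous input (the C major scale shares its pitch classes with A minor), the asymmetry of the profiles lets the algorithm prefer C major, which is one reason the profile-correlation idea works better than simple scale matching.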
Otto Laske was a champion of cognitive musicology.
A collection of papers that he co-edited served to heighten the
visibility of cognitive musicology and to strengthen its association
with AI and music. The foreword of this book reprints a free-wheeling interview with Marvin Minsky, one of the founding fathers of AI, in which he discusses some of his early writings on music and the mind.
AI researcher turned cognitive scientist Douglas Hofstadter has also
contributed a number of ideas pertaining to music from an AI
perspective.
Musician Steve Larson, who worked for a time in Hofstadter's lab,
formulated a theory of "musical forces" derived by analogy with physical
forces. Hofstadter also weighed in on David Cope's Experiments in Musical Intelligence, a computer program known as EMI that produces music in the style of, say, Bach, or Chopin, or Cope.
Cope's programs are written in Lisp,
which turns out to be a popular language for research in cognitive
musicology. Desain and Honing have exploited Lisp in their efforts to
tap the potential of microworld methodology in cognitive musicology
research. Also working in Lisp, Heinrich Taube has explored computer composition from a wide variety of perspectives.
There are, of course, researchers who chose to use languages other
than Lisp for their research into the computational modeling of musical
processes. Robert Rowe, for example, explores "machine musicianship"
through C++ programming.
A rather different computational methodology for researching musical
phenomena is the toolkit approach advocated by David Huron.
At a higher level of abstraction, Geraint Wiggins has investigated
general properties of music knowledge representations such as structural
generality and expressive completeness.
Although a great deal of cognitive musicology research features symbolic computation, notable contributions have also been made using biologically inspired computational paradigms. For example, Jamshed Bharucha and Peter Todd have modeled the perception of tonal music with neural networks, and Al Biles has applied genetic algorithms to the composition of jazz solos. Numerous researchers have explored algorithmic composition grounded in a wide range of mathematical formalisms.
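The genetic-algorithm idea can be illustrated with a toy sketch. This is not Biles's actual GenJam design; the representation (MIDI pitches) and the fitness function rewarding in-scale notes and stepwise motion are illustrative assumptions chosen only to show the evolutionary loop of selection, crossover, and mutation.

```python
# Toy genetic algorithm for melody generation, illustrating the general
# paradigm (not Al Biles's actual GenJam system or fitness criteria).
import random

random.seed(42)

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

def fitness(melody):
    """Illustrative fitness: reward in-scale notes and small melodic steps."""
    in_scale = sum(1 for n in melody if n % 12 in C_MAJOR)
    smooth = sum(1 for a, b in zip(melody, melody[1:]) if abs(a - b) <= 2)
    return in_scale + smooth

def mutate(melody, rate=0.2):
    """Randomly replace some notes with new pitches in the C3-C5 MIDI range."""
    return [random.randint(48, 72) if random.random() < rate else n
            for n in melody]

def crossover(a, b):
    """Single-point crossover of two parent melodies."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(length=8, pop_size=30, generations=100):
    pop = [[random.randint(48, 72) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Real systems such as GenJam differ in important ways (interactive human feedback as the fitness measure, jazz-specific representations), but the loop above captures the shared skeleton of the approach.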
Within cognitive psychology, among the most prominent researchers is Diana Deutsch,
who has engaged in a wide variety of work ranging from studies of
absolute pitch and musical illusions to the formulation of musical
knowledge representations to relationships between music and language. Equally important is Aniruddh D. Patel, whose work combines traditional methodologies of cognitive psychology with neuroscience. Patel is also the author of a comprehensive survey of cognitive science research on music.
Perhaps the most significant contribution to viewing music from a linguistic perspective is the Generative Theory of Tonal Music (GTTM) proposed by Fred Lerdahl and Ray Jackendoff.
Although GTTM is presented at the algorithmic level of abstraction
rather than the implementational level, their ideas have found
computational manifestations in a number of research projects.
For the German-speaking area, Laske's conception of cognitive musicology has been advanced by Uwe Seifert in his book Systematische Musiktheorie und Kognitionswissenschaft. Zur Grundlegung der kognitiven Musikwissenschaft ("Systematic music theory and cognitive science. The foundation of cognitive musicology") and subsequent publications.
Music and language acquisition skills
Both music and speech rely on sound processing and require interpretation of several sound features such as timbre, pitch, duration, and their interactions (Elzbieta, 2015). An fMRI study revealed that Broca's and Wernicke's areas, two areas known to be activated during speech and language processing, were also activated while subjects listened to unexpected musical chords (Elzbieta, 2015). This relation between language and music may explain why exposure to music has been found to accelerate the development of behaviors related to language acquisition. The widely known Suzuki method of music education emphasizes learning music by ear over reading musical notation, and preferably begins formal lessons between the ages of 3 and 5 years. One fundamental argument in favor of this approach points to a parallelism between natural speech acquisition and purely auditory musical training, as opposed to musical training that relies on visual cues. There is evidence that children who take music classes acquire skills that help them in language acquisition and learning (Oechslin, 2015), an ability that relies heavily on the dorsal pathway. Other studies show an overall enhancement of verbal intelligence in children taking music classes. Since both activities tap into several integrated brain functions and share brain pathways, it is understandable why strength in music acquisition might also correlate with strength in language acquisition.
Music and pre-natal development
Extensive prenatal exposure to a melody has been shown to induce neural representations that last for several months. In a study by Partanen in 2013, mothers in the learning group listened to the 'Twinkle, Twinkle, Little Star' melody five times per week during their last trimester. After birth, and again at the age of 4 months, the researchers played infants in both the control and learning groups a modified melody in which some of the notes were changed. Both at birth and at 4 months, infants in the learning group showed stronger event-related potentials to the unchanged notes than the control group. Since listening to music at a young age can already map out lasting neural representations, exposure to music could help strengthen brain plasticity in areas of the brain involved in language and speech processing.
Music therapy effect on cognitive disorders
If neural pathways can be stimulated through an enjoyable activity, they are more likely to remain accessible. This helps explain why music is so powerful and can be used in such a wide range of therapies. Music that a person enjoys elicits a familiar response: listening is not perceived as a chore, yet the brain is still learning and using many of the same functions it employs when speaking or acquiring language. Music therefore has the capability to be a very productive form of therapy, largely because it is stimulating, entertaining, and rewarding. Using fMRI, Menon and Levitin found for the first time that listening to music strongly modulates activity in a network of mesolimbic structures involved in reward processing, including the nucleus accumbens and the ventral tegmental area (VTA), as well as the hypothalamus and insula, all of which are thought to be involved in regulating autonomic and physiological responses to rewarding and emotional stimuli (Gold, 2013).
Pitch perception has been positively correlated with phonemic awareness and reading abilities in children (Flaugnacco, 2014). Likewise, the ability to tap to a rhythmic beat correlates with performance on reading and attention tests (Flaugnacco, 2014). These are only a fraction of the studies linking reading skills with rhythmic perception; a meta-analysis of 25 cross-sectional studies found a significant association between music training and reading skills (Butzlaff, 2000). Given how extensive this correlation is, researchers have naturally tried to determine whether music could serve as an alternative pathway to strengthen reading abilities in people with developmental disorders such as dyslexia. Dyslexia is a disorder characterized by a long-lasting difficulty in reading acquisition, specifically text decoding. Reading is slow and inaccurate despite adequate intelligence and instruction. The difficulties have been shown to stem from a phonological core deficit that impacts reading comprehension, memory, and prediction abilities (Flaugnacco, 2014). Music training has been shown to modify reading and phonological abilities even when these skills are severely impaired: by improving temporal processing and rhythm abilities through training, phonological awareness and reading skills in children with dyslexia were improved. The OPERA hypothesis proposed by Patel (2011) states that because music places higher demands than speech on shared neural processes, it drives adaptive brain plasticity in the same neural networks involved in language processing.
Parkinson's disease is a complex neurological disorder that negatively impacts both motor and non-motor functions, caused by the degeneration of dopaminergic (DA) neurons in the substantia nigra (Ashoori, 2015). This in turn leads to a DA deficiency in the basal ganglia. Dopamine deficiencies in these areas of the brain have been shown to cause symptoms such as resting tremor, rigidity, akinesia, and postural instability; they are also associated with impairments of an individual's internal timing (Ashoori, 2015). Rhythm is a powerful sensory cue that has been shown to help regulate motor timing and coordination when the brain's internal timing system is deficient. Some studies have shown that musically cued gait training significantly improves multiple deficits of Parkinson's disease, including gait, motor timing, and perceptual timing. Ashoori's study involved 15 non-demented patients with idiopathic Parkinson's who had no prior musical training and maintained their dopamine therapy during the trials. There were three 30-minute training sessions per week for 1 month, in which the participants walked to the beat of German folk music without explicit instructions to synchronize their footsteps to it. Compared to pre-training gait performance, the patients showed significant improvement in gait velocity and stride length during the training sessions. The gait improvement was sustained for 1 month after training, indicating a lasting therapeutic effect. Even though the cueing was implicit, the patients' gait synchronized automatically with the rhythm of the music, and the lasting effect suggests that the training influenced internal timing in a way that could not be accessed by other means.