Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, and psychoacoustics. The field of computer music can trace its roots back to the origins of electronic music, and the very first experiments and innovations with electronic instruments at the turn of the 20th century.
In the 2000s, with the widespread availability of relatively affordable home computers with fast processors, and the growth of home recording using digital audio recording systems ranging from GarageBand to Pro Tools, the term is sometimes used to describe music that has been created using digital technology.
History
Much of the work on computer music has drawn on the relationship between music and mathematics, a relationship which has been noted since the Ancient Greeks described the "harmony of the spheres".
Musical melodies were first generated by the computer originally named the CSIR Mark 1 (later renamed CSIRAC) in Australia in 1950. Newspaper reports from America and England (some early, some recent) suggested that computers may have played music earlier, but thorough research has debunked these stories, as there is no evidence to support them (some were obviously speculative). People appear to have speculated that computers could play music, possibly because computers make noises, but there is no evidence that any actually did so before 1950.
The CSIR Mark 1 was designed and built by Trevor Pearcey and Maston Beard from the late 1940s, and mathematician Geoff Hill programmed it to play popular musical melodies from the very early 1950s. Its 1950 performance is the first known use of a digital computer to play music. The music was never recorded, but it has been accurately reconstructed. In 1951 it publicly played the "Colonel Bogey March", of which only the reconstruction exists. However, the CSIR Mark 1 played standard repertoire; it was not used to extend musical thinking or composition practice, as Max Mathews later did and as is current computer-music practice.
The first music to be performed by computer in England was a performance of the British National Anthem, programmed by Christopher Strachey on the Ferranti Mark 1 late in 1951. Later that year, short extracts of three pieces were recorded there by a BBC outside broadcasting unit: the National Anthem, "Baa Baa Black Sheep", and "In the Mood"; this is recognised as the earliest recording of a computer playing music. The recording can be heard on the Manchester University website. Researchers at the University of Canterbury, Christchurch, declicked and restored this recording in 2016, and the results may be heard on SoundCloud.
Two further major 1950s developments were the origins of digital sound synthesis by computer, and of algorithmic composition programs beyond rote playback. Max Mathews at Bell Laboratories developed the influential MUSIC I program and its descendants, further popularizing computer music through a 1963 article in Science. Among other pioneers, the musical chemists Lejaren Hiller and Leonard Isaacson worked on a series of algorithmic composition experiments from 1956 to 1959, manifested in the 1957 premiere of the Illiac Suite for string quartet.
In Japan, experiments in computer music date back to 1962, when Keio University professor Sekine and Toshiba engineer Hayashi experimented with the TOSBAC computer. This resulted in a piece entitled TOSBAC Suite, influenced by the Illiac Suite. Later Japanese computer music compositions include a piece by Kenjiro Ezaki presented during Osaka Expo '70
and "Panoramic Sonore" (1974) by music critic Akimichi Takeda. Ezaki
also published an article called "Contemporary Music and Computers" in
1970. Since then, Japanese research in computer music has largely been
carried out for commercial purposes in popular music, though some of the more serious Japanese musicians used large computer systems such as the Fairlight in the 1970s.
Early computer-music programs typically did not run in real time. Programs would run for hours or days on multimillion-dollar computers to generate a few minutes of music. One way around this was to use a 'hybrid system', in which a microprocessor-based system controls an analog synthesizer; the most notable example was the Roland MC-8 Microcomposer, released in 1978. John Chowning's work on FM synthesis from the 1960s to the 1970s allowed much more efficient digital synthesis, eventually leading to the development of the affordable FM synthesis-based Yamaha DX7 digital synthesizer, released in 1983. In addition to the DX7, the advent of inexpensive digital chips and microcomputers opened the door to real-time generation of computer music. In the 1980s, Japanese personal computers such as the NEC PC-88 came installed with FM synthesis sound chips and featured audio programming languages such as Music Macro Language (MML) and MIDI interfaces, which were most often used to produce video game music, or chiptunes.
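The efficiency of Chowning's technique is easy to see in code: a basic two-operator FM voice needs only one extra sine term inside the carrier's phase. The following minimal sketch (plain Python, writing a mono 16-bit WAV with the standard-library wave module; the carrier and modulator frequencies and the envelope are arbitrary illustrative choices, not DX7 patch data) generates a one-second FM tone.

```python
import math, struct, wave

SR = 44100               # sample rate in Hz
fc, fm = 440.0, 880.0    # carrier and modulator frequencies (illustrative)
dur = 1.0                # seconds

frames = bytearray()
n = int(SR * dur)
for i in range(n):
    t = i / SR
    env = 1.0 - i / n    # simple linear decay envelope
    index = 5.0 * env    # modulation index falls with the envelope
    # Two-operator FM: the modulator sine deviates the carrier's phase.
    sample = env * math.sin(2 * math.pi * fc * t
                            + index * math.sin(2 * math.pi * fm * t))
    frames += struct.pack('<h', int(sample * 32767 * 0.8))

with wave.open('fm_tone.wav', 'wb') as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(bytes(frames))
```

Because each operator is just a sine oscillator, a whole voice costs only a few multiplications per sample, which is what made real-time digital synthesis on inexpensive hardware feasible.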
By the early 1990s, the performance of microprocessor-based computers
reached the point that real-time generation of computer music using more
general programs and algorithms became possible.
Interesting sounds must have a fluidity and changeability that allows them to remain fresh to the ear. In computer music this subtle ingredient is bought at a high computational cost, both in terms of the number of items requiring detail in a score and in the amount of interpretive work the instruments must produce to realize this detail in sound.
Advances
Advances
in computing power and software for manipulation of digital media have
dramatically affected the way computer music is generated and performed.
Current-generation micro-computers are powerful enough to perform very
sophisticated audio synthesis using a wide variety of algorithms and
approaches. Computer music systems and approaches are now ubiquitous,
and so firmly embedded in the process of creating music that we hardly
give them a second thought: computer-based synthesizers, digital mixers,
and effects units have become so commonplace that use of digital rather
than analog technology to create and record music is the norm, rather
than the exception.
Research
Despite
the ubiquity of computer music in contemporary culture, there is
considerable activity in the field of computer music, as researchers
continue to pursue new and interesting computer-based synthesis,
composition, and performance approaches. Throughout the world there are
many organizations and institutions dedicated to the area of computer
and electronic music study and research, including the ICMA (International Computer Music Association), C4DM (Centre for Digital Music), IRCAM, GRAME, SEAMUS (Society for Electro-Acoustic Music in the United States), CEC (Canadian Electroacoustic Community), and a great number of institutions of higher learning around the world.
Computer-generated music
Computer-generated music is music composed
by, or with the extensive aid of, a computer. Although any music which
uses computers in its composition or realisation is computer-generated
to some extent, the use of computers is now so widespread (in the
editing of pop songs, for instance) that the phrase computer-generated
music is generally used to mean a kind of music which could not have
been created without the use of computers.
Two groups of computer-generated music can be distinguished: music in which a computer generates the score, which can then be performed by humans, and music which is both composed and performed by computers.
There is a large genre of music that is organized, synthesized, and
created on computers.
Music composed and performed by computers
Later, composers such as Gottfried Michael Koenig had computers generate the sounds of the composition as well as the score. Koenig produced algorithmic composition programs which were a generalisation of his own serial composition
practice. His approach was not exactly similar to that of Xenakis, who used mathematical abstractions and examined how far he could explore them musically. Koenig's software translated the calculation of mathematical
equations into codes which represented musical notation. This could be
converted into musical notation by hand and then performed by human
players. His programs Project 1 and Project 2 are examples of this kind
of software. Later, he extended the same kind of principles into the
realm of synthesis, enabling the computer to produce the sound directly.
SSP is an example of a program which performs this kind of function.
All of these programs were produced by Koenig at the Institute of Sonology in Utrecht in the 1970s.
Procedures such as those used by Koenig and Xenakis are still in use today. Since the invention of the MIDI
system in the early 1980s, for example, some people have worked on
programs which map MIDI notes to an algorithm and then can either output
sounds or music through the computer's sound card or write an audio file for other programs to play.
Some of these simple programs are based on fractal geometry, and can map MIDI notes to specific fractals,
or fractal equations. Although such programs are widely available and
are sometimes seen as clever toys for the non-musician, some
professional musicians have given them attention also. The resulting
'music' can be more like noise, or can sound quite familiar and
pleasant. As with much algorithmic music, and algorithmic art
in general, more depends on the way in which the parameters are mapped
to aspects of these equations than on the equations themselves. Thus,
for example, the same equation can be made to produce both a lyrical and
melodic piece of music in the style of the mid-nineteenth century, and a
fantastically dissonant cacophony more reminiscent of the avant-garde music of the 1950s and 1960s.
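As a concrete illustration of how much the mapping matters, the sketch below iterates the logistic map and converts each iterate to a MIDI pitch, writing the result to a standard MIDI file. It is a minimal sketch of the general idea, not any particular published program; it assumes the third-party mido package, and the choice of scale and rhythm (the mapping) shapes the result far more than the equation itself.

```python
import mido  # third-party MIDI library (pip install mido)

# Iterate the logistic map x -> r*x*(1-x), a simple chaos-generating equation.
r, x = 3.99, 0.5
values = []
for _ in range(64):
    x = r * x * (1 - x)
    values.append(x)

# The mapping choice: quantize each iterate onto two octaves of C major.
scale = [60, 62, 64, 65, 67, 69, 71, 72, 74, 76, 77, 79, 81, 83]
notes = [scale[int(v * len(scale))] for v in values]

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)
for note in notes:
    track.append(mido.Message('note_on', note=note, velocity=64, time=0))
    track.append(mido.Message('note_off', note=note, velocity=64, time=240))
mid.save('logistic.mid')
```

Swapping the diatonic scale for a chromatic mapping, or deriving durations from the same iterates, yields a completely different piece from the identical equation.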
Other programs can map mathematical formulae and constants to produce sequences of notes. In this manner, an irrational number
can give an infinite sequence of notes where each note is a digit in
the decimal expansion of that number. This sequence can in turn be a
composition in itself, or simply the basis for further elaboration.
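A minimal version of this idea, using only the Python standard library, computes the decimal expansion of the square root of 2 with the decimal module and reads each digit as a scale degree (the C major mapping here is an arbitrary illustrative choice):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50                             # 50 digits of precision
digits = str(Decimal(2).sqrt()).replace('.', '')   # "14142135623..."

# Map each digit 0-9 onto a ten-note stretch of the C major scale.
scale = ['C4', 'D4', 'E4', 'F4', 'G4', 'A4', 'B4', 'C5', 'D5', 'E5']
melody = [scale[int(d)] for d in digits[:32]]
print(' '.join(melody))
```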
Operations such as these, and even more elaborate operations can
also be performed in computer music programming languages such as Max/MSP, Reaktor, SuperCollider, Csound, Pure Data (Pd), Keykit, and ChucK.
These programs now run easily on most personal computers, and are often capable of functions more complex than those which would once have required the most powerful mainframe computers.
There exist programs that generate "human-sounding" melodies by using a vast database of phrases. One example is Band-in-a-Box, which is capable of creating jazz, blues and rock instrumental solos with almost no user interaction. Another is Impro-Visor, which uses a stochastic context-free grammar to generate phrases and complete solos.
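The idea behind a stochastic context-free grammar generator can be sketched in a few lines: each nonterminal carries weighted productions, and expansion samples one production at each step. The toy grammar below is purely illustrative; it is not Impro-Visor's actual grammar.

```python
import random

# Toy weighted grammar: a Phrase is one or two Motifs; a Motif expands to cells.
grammar = {
    'Phrase': [(['Motif'], 0.4), (['Motif', 'Motif'], 0.6)],
    'Motif':  [(['run'], 0.5), (['arpeggio'], 0.3), (['rest', 'run'], 0.2)],
}

def expand(symbol):
    """Recursively expand a nonterminal by sampling weighted productions."""
    if symbol not in grammar:            # terminal: emit it as-is
        return [symbol]
    rules, weights = zip(*grammar[symbol])
    production = random.choices(rules, weights=weights)[0]
    return [t for s in production for t in expand(s)]

print(expand('Phrase'))   # e.g. ['arpeggio', 'rest', 'run']
```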
Another 'cybernetic' approach to computer composition uses specialized hardware to detect external stimuli, which the computer then maps to realize the performance. Examples of this style of computer music can be found in the mid-1980s work of David Rokeby (Very Nervous System), in which audience/performer motions are 'translated' to MIDI segments. Computer-controlled music is also found in performance pieces by the Canadian composer Udo Kasemets, such as the Marce(ntennia)l Circus C(ag)elebrating Duchamp (1987), a realization of the Marcel Duchamp process piece Erratum Musical that used an electric model train to collect a hopper-car of stones, which were deposited on a drum wired to an analog-to-digital converter mapping the stone impacts to a score display (performed in Toronto by pianist Gordon Monahan during the 1987 Duchamp Centennial), and in his installations and performance works (e.g. Spectrascapes) based on his Geo(sono)scope (1986), a 15x4-channel computer-controlled audio mixer. In these latter works, the computer generates soundscapes from tape-loop sound samples, live shortwave, or sine-wave generators.
Computer-generated scores for performance by human players
Many systems for generating musical scores actually existed well before the time of computers. One of these was Musikalisches Würfelspiel (Musical dice game;
18th century), a system which used throws of the dice to randomly
select measures from a large collection of small phrases. When patched
together, these phrases combined to create musical pieces which could be
performed by human players. Although these works were not actually composed with a computer in the modern sense, they use a rudimentary form of the random combinatorial techniques sometimes used in computer-generated composition.
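The dice-game procedure is trivially expressed in code, which is part of why it reads as a precursor of computer-generated composition. A minimal sketch, with a made-up selection table standing in for the historical ones:

```python
import random

# Hypothetical selection table: for each of 8 bars, dice totals 2-12 pick
# one of several pre-composed measures (identified here by number).
table = [{total: total * 10 + bar for total in range(2, 13)} for bar in range(8)]

piece = []
for bar in range(8):
    roll = random.randint(1, 6) + random.randint(1, 6)  # throw two dice
    piece.append(table[bar][roll])                      # chosen measure number

print(piece)   # the sequence of measure numbers to patch together
```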
The world's first digital computer music was generated in Australia by programmer Geoff Hill on the CSIRAC
computer which was designed and built by Trevor Pearcey and Maston
Beard, although it was only used to play standard tunes of the day.
Subsequently, one of the first composers to write music with a computer
was Iannis Xenakis. He wrote programs in the FORTRAN language that generated numeric data that he transcribed into scores to be played by traditional musical instruments. An example is ST/48
of 1962. Although Xenakis could well have composed this music by hand,
the intensity of the calculations needed to transform probabilistic
mathematics into musical notation was best left to the number-crunching
power of the computer.
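The general shape of such a stochastic program (though not Xenakis's actual ST code, which was considerably more elaborate) is to draw onset times, pitches, and durations from probability distributions and print the numeric data for hand transcription into a score. A schematic sketch:

```python
import random

random.seed(48)                      # reproducible draws
t = 0.0
print(f"{'onset':>8} {'pitch':>6} {'dur':>6}")
for _ in range(12):
    t += random.expovariate(2.0)     # inter-onset intervals: a Poisson process
    pitch = random.gauss(60, 12)     # pitch drawn from a normal distribution
    dur = random.uniform(0.1, 1.5)   # duration drawn uniformly
    print(f"{t:8.3f} {round(pitch):6d} {dur:6.2f}")
```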
Computers have also been used in an attempt to imitate the music of great composers of the past, such as Mozart. A present exponent of this technique is David Cope.
He wrote computer programs that analyse works of other composers to
produce new works in a similar style. He has used this program to great
effect with composers such as Bach and Mozart (his program Experiments in Musical Intelligence
is famous for creating "Mozart's 42nd Symphony"), and also within his
own pieces, combining his own creations with those of the computer.
Melomics, a research project from the University of Málaga, Spain, developed a computer composition cluster named Iamus, which composes complex, multi-instrument pieces for editing and performance. In 2012, Iamus composed its first full album, appropriately named Iamus, which New Scientist described as "The first major work composed by a computer and performed by a full orchestra." The group has also developed an API for developers to use the technology, and makes its music available on its website.
Computer-aided algorithmic composition
Computer-aided algorithmic composition (CAAC, pronounced "sea-ack") is the implementation and use of algorithmic composition techniques in software. This label is derived from the combination of two labels, each too vague for continued use. The label computer-aided composition
lacks the specificity of using generative algorithms. Music produced
with notation or sequencing software could easily be considered
computer-aided composition. The label algorithmic composition is likewise too broad, particularly in that it does not specify the use of a computer. The term computer-aided, rather than computer-assisted, is used in the same manner as computer-aided design.
Machine improvisation
Machine improvisation uses computer algorithms to create improvisation
on existing music materials. This is usually done by sophisticated
recombination of musical phrases extracted from existing music, either
live or pre-recorded. In order to achieve credible improvisation in a particular style, machine improvisation uses machine learning and pattern matching
algorithms to analyze existing musical examples. The resulting patterns
are then used to create new variations "in the style" of the original
music, developing a notion of stylistic reinjection.
This is different from other improvisation methods with computers that
use algorithmic composition to generate new music without performing analysis of existing music examples.
Statistical style modeling
Style
modeling implies building a computational representation of the musical
surface that captures important stylistic features from data.
Statistical approaches are used to capture the redundancies in terms of
pattern dictionaries or repetitions, which are later recombined to
generate new musical data. Style mixing can be realized by analysis of a
database containing multiple musical examples in different styles.
Machine improvisation builds upon a long musical tradition of
statistical modeling that began with Hiller and Isaacson's Illiac Suite for String Quartet (1957) and Xenakis' uses of Markov chains and stochastic processes. Modern methods include the use of lossless data compression for incremental parsing, prediction suffix tree and string searching by factor oracle algorithm (basically a factor oracle is a finite state automaton constructed in linear time and space in an incremental fashion).
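The factor oracle's incremental construction is short enough to sketch directly. Below is the standard linear-time algorithm (following Allauzen, Crochemore, and Raffinot's formulation) applied to a pitch sequence, plus a naive improvisation walk that either follows the original continuation or jumps along a suffix link, a crude version of stylistic reinjection. The continuation probability and the input motif are arbitrary illustrative choices.

```python
import random

def build_oracle(seq):
    """Incremental factor oracle: forward transitions and suffix links."""
    n = len(seq)
    trans = [dict() for _ in range(n + 1)]   # state i -> {symbol: state}
    sfx = [-1] * (n + 1)                     # suffix links
    for i in range(1, n + 1):
        c = seq[i - 1]
        trans[i - 1][c] = i                  # internal transition
        k = sfx[i - 1]
        while k > -1 and c not in trans[k]:
            trans[k][c] = i                  # external transition
            k = sfx[k]
        sfx[i] = trans[k][c] if k > -1 else 0
    return trans, sfx

def improvise(seq, length, p_continue=0.8):
    """Walk the oracle: mostly continue linearly, sometimes jump via suffix link."""
    trans, sfx = build_oracle(seq)
    state, out = 0, []
    while len(out) < length:
        if state < len(seq) and random.random() < p_continue:
            out.append(seq[state])           # replay the original continuation
            state += 1
        else:
            state = max(sfx[state], 0)       # recombine: jump to a shared context
            if state >= len(seq):
                state = 0
    return out

motif = [60, 62, 64, 62, 60, 67, 65, 64, 62, 60]
print(improvise(motif, 20))
```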
Uses of machine improvisation
Machine
improvisation encourages musical creativity by providing automatic
modeling and transformation structures for existing music.
This creates a natural interface with the musician without the need for
coding musical algorithms. In live performance, the system re-injects
the musician's material in several different ways, allowing a
semantics-level representation of the session and a smart recombination
and transformation of this material in real time. In the offline version, machine improvisation can be used to achieve style mixing, an approach inspired by Vannevar Bush's imagined memex machine.
Implementations
The first system implementing interactive machine improvisation by means of Markov models and style modeling techniques was the Continuator, developed by François Pachet at Sony CSL Paris in 2002, based on earlier work on non-real-time style modeling.
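A drastically simplified version of the Markov-model idea (not Pachet's actual Continuator, which used variable-order models and real-time input) can be shown in a few lines: learn a first-order transition table from a played phrase, then continue it in the same style.

```python
import random
from collections import defaultdict

def learn(phrase):
    """First-order Markov model: record which pitch follows which."""
    table = defaultdict(list)
    for a, b in zip(phrase, phrase[1:]):
        table[a].append(b)
    return table

def continue_phrase(phrase, table, length=8):
    """Continue the phrase by sampling the learned transitions."""
    note, out = phrase[-1], []
    for _ in range(length):
        candidates = table.get(note) or phrase  # dead end: fall back to source
        note = random.choice(candidates)
        out.append(note)
    return out

played = [60, 62, 64, 62, 60, 62, 64, 67, 64, 62, 60]
print(continue_phrase(played, learn(played)))
```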
A Matlab implementation of the Factor Oracle machine improvisation is available as part of the Computer Audition toolbox. There is also an NTCC implementation of the Factor Oracle machine improvisation.
OMax is a software environment developed at IRCAM. OMax uses OpenMusic and Max. It is based on research on stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov and on research on improvisation with the computer by G. Assayag, M. Chemillier and G. Bloch (a.k.a. the OMax Brothers) in the Ircam Music Representations group.
Musicians working with machine improvisation
Gerard Assayag (IRCAM, France),
Jeremy Baguyos (University of Nebraska at Omaha, US),
Tim Blackwell (Goldsmiths College, Great Britain),
George Bloch (Composer, France),
Marc Chemillier (IRCAM/CNRS, France),
Nick Collins (University of Sussex, UK),
Shlomo Dubnov (Composer, Israel / US),
Mari Kimura (Juilliard, New York City),
Amanuel Zarzowski (Composer Los Angeles/San Diego),
George Lewis (Columbia University, New York City),
Bernard Lubat (Pianist, France),
François Pachet (Sony CSL, France),
Joel Ryan (Institute of Sonology, Netherlands),
Michel Waisvisz (STEIM, Netherlands),
David Wessel (CNMAT, California),
Michael Young (Goldsmiths College, Great Britain),
Pietro Grossi (CNUCE, Institute of the National Research Council, Pisa, Italy),
Toby Gifford and Andrew Brown (Griffith University, Brisbane, Australia),
Davis Salks (jazz composer, Hamburg, PA, US),
Doug Van Nort (electroacoustic improviser, Montreal/New York).
Live coding
Live coding (sometimes known as 'interactive programming', 'on-the-fly programming', or 'just-in-time programming') is the name given to the process of writing software in real time as part of a performance. Recently it has been explored as a more rigorous alternative to laptop performance, which, live coders often feel, lacks the charisma and pizzazz of musicians performing live.
Generally, this practice stages a more general approach: one of
interactive programming, of writing (parts of) programs while they are
interpreted. Traditionally most computer music programs have tended
toward the old write/compile/run model which evolved when computers were
much less powerful. This approach has locked out code-level innovation
by people whose programming skills are more modest. Some programs have
gradually integrated real-time controllers and gesturing (for example, MIDI-driven
software synthesis and parameter control). Until recently, however, the
musician/composer rarely had the capability of real-time modification
of program code itself. This legacy distinction is somewhat erased by
languages such as ChucK, SuperCollider, and Impromptu.
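The essence of this interactive model can be mimicked even in a general-purpose interpreter: run a scheduler that repeatedly calls a pattern function, then redefine that function while the loop is running. A minimal sketch (Python, printing note numbers instead of making sound; the patterns are arbitrary):

```python
import threading, time

def pattern(beat):
    """The running pattern; redefining it changes the music mid-performance."""
    return 60 + (beat % 4) * 2

def scheduler():
    beat = 0
    for _ in range(16):                 # a short 'performance'
        print('note', pattern(beat))    # looks up the *current* definition
        beat += 1
        time.sleep(0.1)

t = threading.Thread(target=scheduler)
t.start()

time.sleep(0.8)                         # let the first pattern play...

def pattern(beat):                      # ...then rewrite it while it runs:
    return 72 - (beat % 8)              # the change is heard on the next beat

t.join()
```

In a live-coding language the rewrite happens at the performer's editor rather than in a script, but the principle is the same: code is rebound while the scheduler keeps running.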
TOPLAP, an ad-hoc conglomerate of artists interested in live coding, was formed in 2004, and promotes the use, proliferation, and exploration of a range of software, languages, and techniques to implement live coding. This is a parallel and collaborative effort with, for example, research at the Princeton Sound Lab, the University of Cologne, and the Computational Arts Research Group at Queensland University of Technology.