
Tuesday, January 15, 2019

Computer music

From Wikipedia, the free encyclopedia

Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, and psychoacoustics. The field of computer music can trace its roots back to the origins of electronic music, and the very first experiments and innovations with electronic instruments at the turn of the 20th century.

In the 2000s, with the widespread availability of relatively affordable home computers with fast processors, and the growth of home recording using digital audio recording systems ranging from GarageBand to Pro Tools, the term is sometimes used to describe music that has been created using digital technology.

History

CSIRAC, Australia's first digital computer, as displayed at the Melbourne Museum
 
Much of the work on computer music has drawn on the relationship between music and mathematics, a relationship which has been noted since the Ancient Greeks described the "harmony of the spheres".
Musical melodies were first generated by a computer, the machine originally named the CSIR Mark 1 (later renamed CSIRAC), in Australia in 1950. Newspaper reports from America and England, both early and more recent, suggested that computers may have played music earlier, but thorough research has debunked these stories: there is no evidence to support them, and some were plainly speculative. People evidently speculated about computers playing music, perhaps because computers make noises, but there is no evidence that any actually did so before 1950.

The world's first computer to play music was the CSIR Mark 1 (later named CSIRAC), designed and built by Trevor Pearcey and Maston Beard from the late 1940s. The mathematician Geoff Hill programmed the CSIR Mark 1 to play popular melodies from the very early 1950s; its use in 1950 is the first known use of a digital computer to play music. The music was never recorded, but it has been accurately reconstructed. In 1951 the machine publicly played the "Colonel Bogey March", of which only the reconstruction exists. However, the CSIR Mark 1 played standard repertoire and was not used to extend musical thinking or composition practice, as Max Mathews later did and as is current computer-music practice.

The first music performed by a computer in England was the British National Anthem, programmed by Christopher Strachey on the Ferranti Mark 1 late in 1951. Later that year, short extracts of three pieces were recorded there by a BBC outside broadcast unit: the National Anthem, "Baa Baa Black Sheep", and "In the Mood"; this is recognised as the earliest recording of a computer playing music. The recording can be heard at this Manchester University site. Researchers at the University of Canterbury, Christchurch declicked and restored this recording in 2016, and the results may be heard on SoundCloud.

Two further major developments of the 1950s were the origins of digital sound synthesis by computer, and of algorithmic composition programs that went beyond rote playback. Max Mathews at Bell Laboratories developed the influential MUSIC I program and its descendants, further popularizing computer music through a 1963 article in Science. Among other pioneers, the musical chemists Lejaren Hiller and Leonard Isaacson worked on a series of algorithmic composition experiments from 1956 to 1959, manifested in the 1957 premiere of the Illiac Suite for string quartet.

In Japan, experiments in computer music date back to 1962, when Keio University professor Sekine and Toshiba engineer Hayashi experimented with the TOSBAC computer. This resulted in a piece entitled TOSBAC Suite, influenced by the Illiac Suite. Later Japanese computer music compositions include a piece by Kenjiro Ezaki presented during Osaka Expo '70 and "Panoramic Sonore" (1974) by music critic Akimichi Takeda. Ezaki also published an article called "Contemporary Music and Computers" in 1970. Since then, Japanese research in computer music has largely been carried out for commercial purposes in popular music, though some of the more serious Japanese musicians used large computer systems such as the Fairlight in the 1970s.

The programming computer for Yamaha's first FM synthesizer GS1. CCRMA, Stanford University
 
Early computer-music programs typically did not run in real time. Programs would run for hours or days on multimillion-dollar computers to generate a few minutes of music. One way around this was to use a 'hybrid system', in which a microprocessor-based system controls an analog synthesizer; the most notable example was the Roland MC-8 Microcomposer, released in 1978. John Chowning's work on FM synthesis from the 1960s to the 1970s allowed much more efficient digital synthesis, eventually leading to the development of the affordable FM synthesis-based Yamaha DX7 digital synthesizer, released in 1983. In addition to the Yamaha DX7, the advent of inexpensive digital chips and microcomputers opened the door to real-time generation of computer music. In the 1980s, Japanese personal computers such as the NEC PC-88 came installed with FM synthesis sound chips and featured audio programming languages such as Music Macro Language (MML) and MIDI interfaces, which were most often used to produce video game music, or chiptunes. By the early 1990s, the performance of microprocessor-based computers reached the point that real-time generation of computer music using more general programs and algorithms became possible.
Interesting sounds must have a fluidity and changeability that allows them to remain fresh to the ear. In computer music this subtle ingredient is bought at a high computational cost, both in terms of the number of items requiring detail in a score and in the amount of interpretive work the instruments must produce to realize this detail in sound.

Advances

Advances in computing power and software for manipulation of digital media have dramatically affected the way computer music is generated and performed. Current-generation micro-computers are powerful enough to perform very sophisticated audio synthesis using a wide variety of algorithms and approaches. Computer music systems and approaches are now ubiquitous, and so firmly embedded in the process of creating music that we hardly give them a second thought: computer-based synthesizers, digital mixers, and effects units have become so commonplace that use of digital rather than analog technology to create and record music is the norm, rather than the exception.

Research

Despite the ubiquity of computer music in contemporary culture, there is considerable activity in the field of computer music, as researchers continue to pursue new and interesting computer-based synthesis, composition, and performance approaches. Throughout the world there are many organizations and institutions dedicated to the area of computer and electronic music study and research, including the ICMA (International Computer Music Association), C4DM (Center for Digital Music), IRCAM, GRAME, SEAMUS (Society for Electro Acoustic Music in the United States), CEC (Canadian Electroacoustic Community), and a great number of institutions of higher learning around the world.

Computer-generated music

Computer-generated music is music composed by, or with the extensive aid of, a computer. Although any music which uses computers in its composition or realisation is computer-generated to some extent, the use of computers is now so widespread (in the editing of pop songs, for instance) that the phrase computer-generated music is generally used to mean a kind of music which could not have been created without the use of computers.

We can distinguish two groups of computer-generated music: music in which a computer generated the score, which could be performed by humans, and music which is both composed and performed by computers. There is a large genre of music that is organized, synthesized, and created on computers.

Music composed and performed by computers

Composers such as Gottfried Michael Koenig had computers generate the sounds of the composition as well as the score. Koenig produced algorithmic composition programs which were a generalisation of his own serial composition practice. His approach differed from that of Xenakis, who used mathematical abstractions and examined how far he could explore them musically. Koenig's software translated the calculation of mathematical equations into codes which represented musical notation. This could be converted into musical notation by hand and then performed by human players. His programs Project 1 and Project 2 are examples of this kind of software. Later, he extended the same kind of principles into the realm of synthesis, enabling the computer to produce the sound directly. SSP is an example of a program which performs this kind of function. All of these programs were produced by Koenig at the Institute of Sonology in Utrecht in the 1970s.

Procedures such as those used by Koenig and Xenakis are still in use today. Since the invention of the MIDI system in the early 1980s, for example, some people have worked on programs which map the output of an algorithm to MIDI notes, and which can then either play the result through the computer's sound card or write an audio file for other programs to play.

Some of these simple programs are based on fractal geometry, and can map fractals or fractal equations to MIDI notes. Although such programs are widely available and are sometimes seen as clever toys for the non-musician, some professional musicians have given them attention as well. The resulting 'music' can be more like noise, or can sound quite familiar and pleasant. As with much algorithmic music, and algorithmic art in general, more depends on the way in which the parameters are mapped to aspects of these equations than on the equations themselves. Thus, for example, the same equation can be made to produce both a lyrical and melodic piece of music in the style of the mid-nineteenth century and a fantastically dissonant cacophony more reminiscent of the avant-garde music of the 1950s and 1960s.
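
To make the point about mappings concrete, the following minimal Python sketch (not taken from any particular program) feeds the same logistic-map equation into two different pitch mappings; the quantized, scale-based mapping sounds far tamer than the wide chromatic one, even though the underlying equation is identical.

    # Minimal sketch: one equation, two mappings. The scale, register and
    # parameter values below are arbitrary choices for illustration.
    def logistic_orbit(r=3.9, x=0.5, n=32):
        """Iterate x -> r*x*(1-x) and return the sequence of values in (0, 1)."""
        values = []
        for _ in range(n):
            x = r * x * (1 - x)
            values.append(x)
        return values

    def map_to_scale(values, scale, base=60):
        """Quantize each value onto a small scale: a smoother, more 'lyrical' result."""
        return [base + scale[int(v * len(scale)) % len(scale)] for v in values]

    def map_to_chromatic(values, span=48, base=36):
        """Spread each value over four chromatic octaves: a jagged, 'cacophonous' result."""
        return [base + int(v * span) for v in values]

    orbit = logistic_orbit()
    pentatonic = [0, 2, 4, 7, 9]                    # C major pentatonic degrees
    print("lyrical  :", map_to_scale(orbit, pentatonic))
    print("dissonant:", map_to_chromatic(orbit))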

Other programs can map mathematical formulae and constants to produce sequences of notes. In this manner, an irrational number can give an infinite sequence of notes, where each note is a digit in the decimal expansion of that number. This sequence can in turn be a composition in itself, or simply the basis for further elaboration.
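
As a minimal illustration of this idea (a generic sketch, not a description of any particular program), the digits of an irrational number can be read off one by one and mapped onto a scale:

    # Map the decimal digits of sqrt(2) onto MIDI pitches in a major scale.
    from decimal import Decimal, getcontext

    def digits_of_sqrt2(n=32):
        """First n decimal digits of sqrt(2) after the decimal point."""
        getcontext().prec = n + 5
        return [int(ch) for ch in str(Decimal(2).sqrt()).split(".")[1][:n]]

    def digits_to_midi(digits, base=60):
        """Each digit 0-9 becomes a degree of a two-octave major scale."""
        major = [0, 2, 4, 5, 7, 9, 11, 12, 14, 16]   # ten degrees for ten digits
        return [base + major[d] for d in digits]

    print(digits_to_midi(digits_of_sqrt2()))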

Operations such as these, and even more elaborate ones, can also be performed in computer music programming languages such as Max/MSP, Reaktor, SuperCollider, Csound, Pure Data (Pd), Keykit, and ChucK. These programs now run easily on most personal computers, and are often capable of functions more complex than those that would have required the most powerful mainframe computers several decades ago.

There exist programs that generate "human-sounding" melodies by using a vast database of phrases. One example is Band-in-a-Box, which is capable of creating jazz, blues and rock instrumental solos with almost no user interaction. Another is Impro-Visor, which uses a stochastic context-free grammar to generate phrases and complete solos.
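
The following Python sketch shows the general idea of a stochastic context-free grammar applied to melody; the rules, weights and note names are invented for illustration and are not taken from Impro-Visor or Band-in-a-Box.

    import random

    # Each non-terminal maps to a list of (probability, expansion) pairs;
    # strings that are not keys of GRAMMAR are terminals (note names or "rest").
    GRAMMAR = {
        "PHRASE": [(0.5, ["MOTIF", "MOTIF"]), (0.5, ["MOTIF", "PHRASE"])],
        "MOTIF":  [(0.6, ["NOTE", "NOTE", "NOTE", "NOTE"]), (0.4, ["NOTE", "REST", "NOTE"])],
        "NOTE":   [(0.2, ["C4"]), (0.2, ["D4"]), (0.2, ["E4"]), (0.2, ["G4"]), (0.2, ["A4"])],
        "REST":   [(1.0, ["rest"])],
    }

    def expand(symbol):
        """Recursively expand a symbol by sampling one weighted rule at a time."""
        if symbol not in GRAMMAR:                    # terminal symbol
            return [symbol]
        r, acc = random.random(), 0.0
        for prob, expansion in GRAMMAR[symbol]:
            acc += prob
            if r <= acc:
                break
        return [token for part in expansion for token in expand(part)]

    print(" ".join(expand("PHRASE")))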

Another 'cybernetic' approach to computer composition uses specialized hardware to detect external stimuli, which are then mapped by the computer to realize the performance. Examples of this style of computer music can be found in the mid-1980s work of David Rokeby (Very Nervous System), in which audience and performer motions are 'translated' into MIDI segments. Computer-controlled music is also found in the performance pieces by the Canadian composer Udo Kasemets, such as the Marce(ntennia)l Circus C(ag)elebrating Duchamp (1987), a realization of the Marcel Duchamp process piece Erratum Musical using an electric model train to collect a hopper-car of stones to be deposited on a drum wired to an analog-to-digital converter, mapping the stone impacts to a score display (performed in Toronto by pianist Gordon Monahan during the 1987 Duchamp Centennial), and in his installations and performance works (e.g. Spectrascapes) based on his Geo(sono)scope (1986), a 15x4-channel computer-controlled audio mixer. In these latter works, the computer generates sound-scapes from tape-loop sound samples, live shortwave or sine-wave generators.

Computer-generated scores for performance by human players

Many systems for generating musical scores existed well before the advent of computers. One of these was the Musikalisches Würfelspiel (musical dice game) of the 18th century, a system which used throws of dice to randomly select measures from a large collection of small phrases. When patched together, these phrases combined to create musical pieces which could be performed by human players. Although these works were not actually composed with a computer in the modern sense, they use a rudimentary form of the random combinatorial techniques sometimes used in computer-generated composition.
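
The procedure is easy to state in code. The Python sketch below (with a randomly filled stand-in table, not Mozart's or Kirnberger's actual tables) rolls two dice for each bar and looks up which pre-composed measure to play:

    import random

    # TABLE[bar][dice_total] -> index of a pre-composed measure in some collection.
    # Here the table is filled randomly as a placeholder for a historical table.
    NUM_BARS, NUM_MEASURES = 8, 96
    TABLE = [{total: random.randrange(NUM_MEASURES) for total in range(2, 13)}
             for _ in range(NUM_BARS)]

    def roll_piece():
        """Roll two dice per bar and patch the chosen measures together into one piece."""
        piece = []
        for bar in range(NUM_BARS):
            total = random.randint(1, 6) + random.randint(1, 6)
            piece.append(TABLE[bar][total])
        return piece

    print(roll_piece())   # e.g. [17, 3, 88, ...]: measure numbers to play in order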

The world's first digital computer music was generated in Australia by programmer Geoff Hill on the CSIRAC computer which was designed and built by Trevor Pearcey and Maston Beard, although it was only used to play standard tunes of the day. Subsequently, one of the first composers to write music with a computer was Iannis Xenakis. He wrote programs in the FORTRAN language that generated numeric data that he transcribed into scores to be played by traditional musical instruments. An example is ST/48 of 1962. Although Xenakis could well have composed this music by hand, the intensity of the calculations needed to transform probabilistic mathematics into musical notation was best left to the number-crunching power of the computer.

Computers have also been used in an attempt to imitate the music of great composers of the past, such as Mozart. A present exponent of this technique is David Cope. He wrote computer programs that analyse the works of other composers to produce new works in a similar style. He has used this program to great effect with composers such as Bach and Mozart (his program Experiments in Musical Intelligence is famous for creating "Mozart's 42nd Symphony"), and also within his own pieces, combining his own creations with those of the computer.

Melomics, a research project from the University of Málaga, Spain, developed a computer composition cluster named Iamus, which composes complex, multi-instrument pieces for editing and performance. Since its inception, Iamus has composed a full album in 2012, appropriately named Iamus, which New Scientist described as "The first major work composed by a computer and performed by a full orchestra." The group has also developed an API for developers to utilize the technology, and makes its music available on its website.

Computer-aided algorithmic composition

Diagram illustrating the position of CAAC in relation to other generative music systems
 
Computer-aided algorithmic composition (CAAC, pronounced "sea-ack") is the implementation and use of algorithmic composition techniques in software. This label is derived from the combination of two labels, each too vague for continued use. The label computer-aided composition lacks the specificity of using generative algorithms. Music produced with notation or sequencing software could easily be considered computer-aided composition. The label algorithmic composition is likewise too broad, particularly in that it does not specify the use of a computer. The term computer-aided, rather than computer-assisted, is used in the same manner as computer-aided design.

Machine improvisation

Machine improvisation uses computer algorithms to create improvisation on existing music materials. This is usually done by sophisticated recombination of musical phrases extracted from existing music, either live or pre-recorded. In order to achieve credible improvisation in a particular style, machine improvisation uses machine learning and pattern matching algorithms to analyze existing musical examples. The resulting patterns are then used to create new variations "in the style" of the original music, developing a notion of stylistic reinjection. This differs from other methods of improvisation with computers that use algorithmic composition to generate new music without analyzing existing music examples.

Statistical style modeling

Style modeling implies building a computational representation of the musical surface that captures important stylistic features from data. Statistical approaches are used to capture the redundancies in terms of pattern dictionaries or repetitions, which are later recombined to generate new musical data. Style mixing can be realized by analysis of a database containing multiple musical examples in different styles. Machine improvisation builds upon a long musical tradition of statistical modeling that began with Hiller and Isaacson's Illiac Suite for String Quartet (1957) and Xenakis's use of Markov chains and stochastic processes. Modern methods include the use of lossless data compression for incremental parsing, prediction suffix trees, and string searching by the factor oracle algorithm (basically, a factor oracle is a finite state automaton constructed in linear time and space in an incremental fashion).
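
A first-order Markov chain is the simplest version of this idea: transition statistics are learned from example material and then recombined into new sequences "in the style" of the corpus. The Python sketch below is a minimal illustration with an invented corpus, not a description of any of the systems named above.

    import random
    from collections import defaultdict

    def train(melodies):
        """Count pitch-to-pitch transitions over a corpus of example melodies."""
        table = defaultdict(list)
        for melody in melodies:
            for a, b in zip(melody, melody[1:]):
                table[a].append(b)
        return table

    def generate(table, start, length=16):
        """Recombine learned transitions into a new melody 'in the style of' the corpus."""
        out = [start]
        for _ in range(length - 1):
            choices = table.get(out[-1])
            if not choices:              # dead end: restart from a random learned pitch
                choices = list(table.keys())
            out.append(random.choice(choices))
        return out

    corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60],   # invented training melodies
              [67, 65, 64, 62, 60, 62, 64, 67]]
    model = train(corpus)
    print(generate(model, start=60))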

Uses of machine improvisation

Machine improvisation encourages musical creativity by providing automatic modeling and transformation structures for existing music. This creates a natural interface with the musician without the need for coding musical algorithms. In live performance, the system re-injects the musician's material in several different ways, allowing a semantics-level representation of the session and a smart recombination and transformation of this material in real time. In its offline version, machine improvisation can be used to achieve style mixing, an approach inspired by Vannevar Bush's imaginary memex machine.

Implementations

The first system implementing interactive machine improvisation by means of Markov models and style modeling techniques was the Continuator, developed by François Pachet at Sony CSL Paris in 2002, based on earlier work on non-real-time style modeling. A MATLAB implementation of factor oracle machine improvisation is available as part of the Computer Audition toolbox, and there is also an NTCC implementation of factor oracle machine improvisation.

OMax is a software environment developed at IRCAM. OMax uses OpenMusic and Max. It is based on research on stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov and on research on improvisation with the computer by G. Assayag, M. Chemillier and G. Bloch (a.k.a. the OMax Brothers) in the Ircam Music Representations group.

Musicians working with machine improvisation

Gerard Assayag (IRCAM, France), Jeremy Baguyos (University of Nebraska at Omaha, US), Tim Blackwell (Goldsmiths College, Great Britain), George Bloch (Composer, France), Marc Chemillier (IRCAM/CNRS, France), Nick Collins (University of Sussex, UK), Shlomo Dubnov (Composer, Israel / US), Mari Kimura (Juilliard, New York City), Amanuel Zarzowski (Composer, Los Angeles/San Diego), George Lewis (Columbia University, New York City), Bernard Lubat (Pianist, France), François Pachet (Sony CSL, France), Joel Ryan (Institute of Sonology, Netherlands), Michel Waisvisz (STEIM, Netherlands), David Wessel (CNMAT, California), Michael Young (Goldsmiths College, Great Britain), Pietro Grossi (CNUCE, Institute of the National Research Council, Pisa, Italy), Toby Gifford and Andrew Brown (Griffith University, Brisbane, Australia), Davis Salks (jazz composer, Hamburg, PA, US), Doug Van Nort (electroacoustic improviser, Montreal/New York).

Live coding

Live coding (sometimes known as 'interactive programming', 'on-the-fly programming', or 'just-in-time programming') is the name given to the process of writing software in real time as part of a performance. Recently it has been explored as a more rigorous alternative to laptop performance, whose practitioners, live coders often feel, lack the charisma and pizzazz of musicians performing live.

More generally, this practice is an instance of interactive programming: writing (parts of) programs while they are being interpreted. Traditionally most computer music programs have tended toward the old write/compile/run model, which evolved when computers were much less powerful. This approach has locked out code-level innovation by people whose programming skills are more modest. Some programs have gradually integrated real-time controllers and gesturing (for example, MIDI-driven software synthesis and parameter control). Until recently, however, the musician/composer rarely had the capability of modifying program code itself in real time. This legacy distinction is somewhat erased by languages such as ChucK, SuperCollider, and Impromptu.

TOPLAP, an ad-hoc conglomerate of artists interested in live coding, was formed in 2004, and promotes the use, proliferation and exploration of a range of software, languages and techniques to implement live coding. This is a parallel and collaborative effort with, for example, research at the Princeton Sound Lab, the University of Cologne, and the Computational Arts Research Group at Queensland University of Technology.

Generative art

From Wikipedia, the free encyclopedia

Condensation Cube, Plexiglas and water; Hirshhorn Museum and Sculpture Garden, begun 1965, completed 2008 by Hans Haacke
 
Iridem for trombone and clarinet, 1983 by Sergio Maltagliati
 
Interactive installation, CIMs series, 2000, by Maurizio Bolognini
 
Installation view of Irrational Geometrics 2008 by Pascal Dombis
 
Telepresence-based installation 10.000 Moving Cities, 2016 by Marc Lee
 
Generative art refers to art that in whole or in part has been created with the use of an autonomous system. An autonomous system in this context is generally one that is non-human and can independently determine features of an artwork that would otherwise require decisions made directly by the artist. In some cases the human creator may claim that the generative system represents their own artistic idea, and in others that the system takes on the role of the creator.

"Generative art" often refers to algorithmic art (algorithmically determined computer generated artwork), but artists can also make it using systems of chemistry, biology, mechanics and robotics, smart materials, manual randomization, mathematics, data mapping, symmetry, tiling, and more.

History

The use of the word "generative" in the discussion of art has developed over time. The use of "artificial DNA" defines a generative approach to art focused on the construction of a system able to generate unpredictable events, all with a recognizable common character. The use of autonomous systems, required by some contemporary definitions, focuses on a generative approach in which the controls are strongly reduced. This approach is also called "emergent". Margaret Boden and Ernest Edmonds have noted the use of the term "generative art" in the broad context of automated computer graphics in the 1960s, beginning with artwork exhibited by Georg Nees and Frieder Nake in 1965:
The terms "generative art" and "computer art" have been used in tandem, and more or less interchangeably, since the very earliest days.
The first such exhibition showed the work of Nees in February 1965, which some claim was titled "Generative Computergrafik". While Nees does not himself remember, this was the title of his doctoral thesis published a few years later; the correct title of the first exhibition and catalog was "computer-grafik". "Generative art" and related terms were in common use by several other early computer artists around this time, including Manfred Mohr. The term "Generative Art", in the sense of dynamic artwork-systems able to generate multiple artwork-events, was first clearly used at the "Generative Art" conference in Milan in 1998.

The term has also been used to describe geometric abstract art where simple elements are repeated, transformed, or varied to generate more complex forms. Thus defined, generative art was practised by the Argentinian artists Eduardo McEntyre and Miguel Ángel Vidal in the late 1960s. In 1972 the Romanian-born Paul Neagu created the Generative Art Group in Britain. It was populated exclusively by Neagu using aliases such as "Hunsy Belmood" and "Edward Larsocchi." In 1972 Neagu gave a lecture titled 'Generative Art Forms' at the Queen's University, Belfast Festival.

In 1970 the School of the Art Institute of Chicago created a department called "Generative Systems." As described by Sonia Landy Sheridan the focus was on art practices using the then new technologies for the capture, inter-machine transfer, printing and transmission of images, as well as the exploration of the aspect of time in the transformation of image information.

In 1988 Clauser identified the aspect of systemic autonomy as a critical element in generative art:
It should be evident from the above description of the evolution of generative art that process (or structuring) and change (or transformation) are among its most definitive features, and that these features and the very term 'generative' imply dynamic development and motion. ...

(the result) is not a creation by the artist but rather the product of the generative process - a self-precipitating structure.
In 1989 Celestino Soddu defined the Generative Design approach to Architecture and Town Design in his book Citta' Aleatorie.

In 1989 Franke referred to "generative mathematics" as "the study of mathematical operations suitable for generating artistic images."

From the mid-1990s Brian Eno popularized the terms generative music and generative systems, making a connection with earlier experimental music by Terry Riley, Steve Reich and Philip Glass.

From the end of the 20th century, communities of generative artists, designers, musicians and theoreticians began to meet, forming cross-disciplinary perspectives. The first meeting devoted to generative art was in 1998, at the inaugural International Generative Art conference at Politecnico di Milano University, Italy. In Australia, the Iterate conference on generative systems in the electronic arts followed in 1999. On-line discussion has centered around the eu-gene mailing list, which began in late 1999 and has hosted much of the debate which has defined the field. These activities have more recently been joined by the Generator.x conference in Berlin, starting in 2005. In 2012 the new journal GASATHJ (Generative Art Science and Technology Hard Journal) was founded by Celestino Soddu and Enrica Colabella, joining several generative artists and scientists in its editorial board.

Some have argued that as a result of this engagement across disciplinary boundaries, the community has converged on a shared meaning of the term. As Boden and Edmonds put it in 2011:
Today, the term "Generative Art" is still current within the relevant artistic community. Since 1998 a series of conferences have been held in Milan with that title (Generativeart.com), and Brian Eno has been influential in promoting and using generative art methods (Eno, 1996). Both in music and in visual art, the use of the term has now converged on work that has been produced by the activation of a set of rules and where the artist lets a computer system take over at least some of the decision-making (although, of course, the artist determines the rules).
In the call for the Generative Art conferences in Milan (held annually since 1998), the definition of generative art given by Celestino Soddu is:
Generative Art is the idea realized as genetic code of artificial events, as construction of dynamic complex systems able to generate endless variations. Each Generative Project is a concept-software that works producing unique and non-repeatable events, like music or 3D Objects, as possible and manifold expressions of the generating idea strongly recognizable as a vision belonging to an artist / designer / musician / architect /mathematician.
Discussion on the eu-gene mailing list was framed by the following definition by Adrian Ward from 1999:
Generative art is a term given to work which stems from concentrating on the processes involved in producing an artwork, usually (although not strictly) automated by the use of a machine or computer, or by using mathematic or pragmatic instructions to define the rules by which such artworks are executed.
A similar definition is provided by Philip Galanter:
Generative art refers to any art practice where the artist creates a process, such as a set of natural language rules, a computer program, a machine, or other procedural invention, which is then set into motion with some degree of autonomy contributing to or resulting in a completed work of art.

Types

Music

Johann Philipp Kirnberger's "Musikalisches Würfelspiel" (Musical Dice Game) of 1757 is considered an early example of a generative system based on randomness. Dice were used to select musical sequences from a numbered pool of previously composed phrases. This system provided a balance of order and disorder: the structure was based on an element of order on one hand and disorder on the other.

The fugues of J.S. Bach could be considered generative, in that there is a strict underlying process that is followed by the composer. Similarly, serialism follows strict procedures which, in some cases, can be set up to generate entire compositions with limited human intervention.

Composers such as John Cage, Farmers Manual and Brian Eno have used generative systems in their works.

Visual art

The artist Ellsworth Kelly created paintings by using chance operations to assign colors in a grid. He also created works on paper that he then cut into strips or squares and reassembled using chance operations to determine placement.

Album de 10 sérigraphies sur 10 ans, by François Morellet, 2009
 
Iapetus, by Jean-Max Albert, 1985
 
Calmoduline Monument, by Jean-Max Albert, 1991
 
Artists such as Hans Haacke have explored processes of physical and social systems in an artistic context. François Morellet has used both highly ordered and highly disordered systems in his artwork. Some of his paintings feature regular systems of radial or parallel lines to create moiré patterns. In other works he has used chance operations to determine the coloration of grids. Sol LeWitt created generative art in the form of systems expressed in natural language and systems of geometric permutation. Harold Cohen's AARON system is a longstanding project combining software artificial intelligence with robotic painting devices to create physical artifacts. Steina and Woody Vasulka are video art pioneers who used analog video feedback to create generative art. Video feedback is now cited as an example of deterministic chaos, and the early explorations by the Vasulkas anticipated contemporary science by many years. Software systems exploiting evolutionary computing to create visual form include those created by Scott Draves and Karl Sims. The digital artist Joseph Nechvatal has exploited models of viral contagion. Autopoiesis by Ken Rinaldo includes fifteen musical and robotic sculptures that interact with the public and modify their behaviors based on both the presence of the participants and each other. Jean-Pierre Hebert and Roman Verostko are founding members of the Algorists, a group of artists who create their own algorithms to create art. A. Michael Noll, of Bell Telephone Laboratories, Incorporated, programmed computer art using mathematical equations and programmed randomness, starting in 1962. The French artist Jean-Max Albert, besides environmental sculptures like Iapetus and O=C=O, developed a project dedicated to the vegetation itself, in terms of biological activity. The Calmoduline Monument project is based on the property of a protein, calmodulin, to bond selectively to calcium. Exterior physical constraints (wind, rain, etc.) modify the electric potential of the cellular membranes of a plant and consequently the flux of calcium; the calcium in turn controls the expression of the calmodulin gene. The plant can thus, when there is a stimulus, modify its "typical" growth pattern. The basic principle of this monumental sculpture is that, to the extent that these signals could be picked up and transported, they could be enlarged and translated into colors and shapes, showing the plant's "decisions" and suggesting a level of fundamental biological activity.

Maurizio Bolognini works with generative machines to address conceptual and social concerns. Mark Napier is a pioneer in data mapping, creating works based on the streams of zeros and ones in ethernet traffic, as part of the "Carnivore" project. Martin Wattenberg pushed this theme further, transforming "data sets" as diverse as musical scores (in "Shape of Song", 2001) and Wikipedia edits (History Flow, 2003, with Fernanda Viegas) into dramatic visual compositions. The Canadian artist San Base developed a "Dynamic Painting" algorithm in 2002. Using computer algorithms as "brush strokes," Base creates sophisticated imagery that evolves over time to produce a fluid, never-repeating artwork.

Software art

For some artists, graphic user interfaces and computer code have become an independent art form in themselves. Adrian Ward created Auto-Illustrator as a commentary on software and generative methods applied to art and design.

Architecture

In 1987 Celestino Soddu created an artificial DNA of Italian medieval towns, able to generate endless 3D models of cities identifiable as belonging to the same idea.

Literature

Writers such as Tristan Tzara, Brion Gysin, and William Burroughs used the cut-up technique to introduce randomization to literature as a generative system. Jackson Mac Low produced computer-assisted poetry and used algorithms to generate texts; Philip M. Parker has written software to automatically generate entire books. Jason Nelson used generative methods with Speech-to-Text software to create a series of digital poems from movies, television and other audio sources.

Live coding

Generative systems may be modified while they operate, for example by using interactive programming environments such as Max/MSP, vvvv, Fluxus, Isadora, Quartz Composer and openFrameworks. This is a standard approach to programming by artists, but may also be used to create live music and/or video by manipulating generative systems on stage, a performance practice that has become known as live coding. As with many examples of software art, because live coding emphasises human authorship rather than autonomy, it may be considered in opposition to generative art.

Theories

Philip Galanter

In the most widely cited theory of generative art, Philip Galanter (2003) describes generative art systems in the context of complexity theory. In particular, the notion of Murray Gell-Mann and Seth Lloyd's effective complexity is cited. In this view both highly ordered and highly disordered generative art can be viewed as simple. Highly ordered generative art minimizes entropy and allows maximal data compression, while highly disordered generative art maximizes entropy and disallows significant data compression. Maximally complex generative art blends order and disorder in a manner similar to biological life, and indeed biologically inspired methods are most frequently used to create complex generative art. This view is at odds with the earlier information-theory-influenced views of Max Bense and Abraham Moles, in which complexity in art increases with disorder.
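
The compression intuition behind this view is easy to demonstrate. In the Python sketch below (an illustration of the general idea, not Galanter's own method), a highly ordered sequence compresses almost completely, uniform noise hardly compresses at all, and a structured-but-varied sequence falls in between.

    import random, zlib

    def ratio(data: bytes) -> float:
        """Compressed size divided by original size (lower = more compressible)."""
        return len(zlib.compress(data)) / len(data)

    ordered    = bytes([60] * 1024)                                   # one repeated value
    disordered = bytes(random.randrange(256) for _ in range(1024))    # uniform noise
    structured = bytes((60 + [0, 2, 4, 5, 7, 9, 11][i % 7] + random.choice([0, 12]))
                       for i in range(1024))                          # pattern plus variation

    for name, data in [("ordered", ordered), ("disordered", disordered), ("structured", structured)]:
        print(name, round(ratio(data), 3))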

Galanter notes further that, given the use of visual symmetry, pattern, and repetition by the most ancient known cultures, generative art is as old as art itself. He also addresses the mistaken equivalence by some that rule-based art is synonymous with generative art. For example, some art is based on constraint rules that disallow the use of certain colors or shapes. Such art is not generative because constraint rules are not constructive, i.e. by themselves they don't assert what is to be done, only what cannot be done.

Margaret Boden and Ernest Edmonds

In their 2009 article, Margaret Boden and Ernest Edmonds agree that generative art need not be restricted to that done using computers, and that some rule-based art is not generative. They develop a technical vocabulary that includes Ele-art (electronic art), C-art (computer art), D-art (digital art), CA-art (computer assisted art), G-art (generative art), CG-art (computer based generative art), Evo-art (evolutionary based art), R-art (robotic art), I-art (interactive art), CI-art (computer based interactive art), and VR-art (virtual reality art).

Questions

The discourse around generative art can be characterized by the theoretical questions which motivate its development. McCormack et al. propose the following questions, shown with paraphrased summaries, as the most important:
  1. Can a machine originate anything? Related to machine intelligence - can a machine generate something new, meaningful, surprising and of value: a poem, an artwork, a useful idea, a solution to a long-standing problem?
  2. What is it like to be a computer that makes art? If a computer could originate art, what would it be like from the computer's perspective?
  3. Can human aesthetics be formalized?
  4. What new kinds of art does the computer enable? Many generative artworks do not involve digital computers, but what does generative computer art bring that is new?
  5. In what sense is generative art representational, and what is it representing?
  6. What is the role of randomness in generative art? For example, what does the use of randomness say about the place of intentionality in the making of art?
  7. What can computational generative art tell us about creativity? How could generative art give rise to artifacts and ideas that are new, surprising and valuable?
  8. What characterizes good generative art? How can we form a more critical understanding of generative art?
  9. What can we learn about art from generative art? For example, can the art world be considered a complex generative system involving many processes outside the direct control of artists, who are agents of production within a stratified global art market?
  10. What future developments would force us to rethink our answers?
Another question is of postmodernism—are generative art systems the ultimate expression of the postmodern condition, or do they point to a new synthesis based on a complexity-inspired world-view?

Computational creativity

From Wikipedia, the free encyclopedia

Computational creativity (also known as artificial creativity, mechanical creativity, creative computing or creative computation) is a multidisciplinary endeavor that is located at the intersection of the fields of artificial intelligence, cognitive psychology, philosophy, and the arts.

The goal of computational creativity is to model, simulate or replicate creativity using a computer, to achieve one of several ends:
  • To construct a program or computer capable of human-level creativity.
  • To better understand human creativity and to formulate an algorithmic perspective on creative behavior in humans.
  • To design programs that can enhance human creativity without necessarily being creative themselves.
The field of computational creativity concerns itself with theoretical and practical issues in the study of creativity. Theoretical work on the nature and proper definition of creativity is performed in parallel with practical work on the implementation of systems that exhibit creativity, with one strand of work informing the other.

Theoretical issues

As measured by the amount of activity in the field (e.g., publications, conferences and workshops), computational creativity is a growing area of research. But the field is still hampered by a number of fundamental problems. Creativity is very difficult, perhaps even impossible, to define in objective terms. Is it a state of mind, a talent or ability, or a process? Creativity takes many forms in human activity, some eminent (sometimes referred to as "Creativity" with a capital C) and some mundane.

These are problems that complicate the study of creativity in general, but certain problems attach themselves specifically to computational creativity:
  • Can creativity be hard-wired? In existing systems to which creativity is attributed, is the creativity that of the system or that of the system's programmer or designer?
  • How do we evaluate computational creativity? What counts as creativity in a computational system? Are natural language generation systems creative? Are machine translation systems creative? What distinguishes research in computational creativity from research in artificial intelligence generally?
  • If eminent creativity is about rule-breaking or the disavowal of convention, how is it possible for an algorithmic system to be creative? In essence, this is a variant of Ada Lovelace's objection to machine intelligence, as recapitulated by modern theorists such as Teresa Amabile. If a machine can do only what it was programmed to do, how can its behavior ever be called creative?
Indeed, not all computer theorists would agree with the premise that computers can only do what they are programmed to do—a key point in favor of computational creativity.

Defining creativity in computational terms

Because no single perspective or definition seems to offer a complete picture of creativity, the AI researchers Newell, Shaw and Simon developed the combination of novelty and usefulness into the cornerstone of a multi-pronged view of creativity, one that uses the following four criteria to categorize a given answer or solution as creative:
  1. The answer is novel and useful (either for the individual or for society)
  2. The answer demands that we reject ideas we had previously accepted
  3. The answer results from intense motivation and persistence
  4. The answer comes from clarifying a problem that was originally vague
Whereas the above reflects a "top-down" approach to computational creativity, an alternative thread has developed among "bottom-up" computational psychologists involved in artificial neural network research. During the late 1980s and early 1990s, for example, such generative neural systems were driven by genetic algorithms. Experiments involving recurrent nets were successful in hybridizing simple musical melodies and predicting listener expectations. 

Concurrent with such research, a number of computational psychologists took the perspective, popularized by Stephen Wolfram, that system behaviors perceived as complex, including the mind's creative output, could arise from what would be considered simple algorithms. As neuro-philosophical thinking matured, it also became evident that language actually presented an obstacle to producing a scientific model of cognition, creative or not, since it carried with it so many unscientific aggrandizements that were more uplifting than accurate. Thus questions naturally arose as to how "rich," "complex," and "wonderful" creative cognition actually was.

Artificial neural networks

Artificial neural networks have been used to model certain aspects of creativity since the late 1980s. Peter Todd (1989) first trained a neural network to reproduce musical melodies from a training set of musical pieces. Then he used a change algorithm to modify the network's input parameters, and the network was able to randomly generate new music in a highly uncontrolled manner. In 1992, Todd extended this work, using the so-called distal teacher approach that had been developed by Paul Munro, Paul Werbos, D. Nguyen, Bernard Widrow, Michael I. Jordan, and David Rumelhart. In the new approach there are two neural networks, one of which supplies training patterns to the other. In later efforts by Todd, a composer would select a set of melodies that define the melody space, position them on a 2-d plane with a mouse-based graphic interface, train a connectionist network to produce those melodies, and then listen to the new "interpolated" melodies that the network generates, corresponding to intermediate points in the 2-d plane.

More recently a neurodynamical model of semantic networks has been developed to study how the connectivity structure of these networks relates to the richness of the semantic constructs, or ideas, they can generate. It was demonstrated that certain connectivity structures give rise to richer semantic dynamics than others, which may provide insight into the important issue of how the physical structure of the brain determines one of the most profound features of the human mind: its capacity for creative thought.

Key concepts from the literature

Some high-level and philosophical themes recur throughout the field of computational creativity.

Important categories of creativity

Margaret Boden refers to creativity that is novel merely to the agent that produces it as "P-creativity" (or "psychological creativity"), and refers to creativity that is recognized as novel by society at large as "H-creativity" (or "historical creativity"). Stephen Thaler has suggested a new category he calls "V-" or "visceral creativity", wherein significance is invented for raw sensory inputs to a Creativity Machine architecture, with the "gateway" nets perturbed to produce alternative interpretations and downstream nets shifting such interpretations to fit the overarching context. An important variety of such V-creativity is consciousness itself, wherein meaning is reflexively invented for activation turnover within the brain.

Exploratory and transformational creativity

Boden also distinguishes between the creativity that arises from an exploration within an established conceptual space, and the creativity that arises from a deliberate transformation or transcendence of this space. She labels the former exploratory creativity and the latter transformational creativity, seeing the latter as a form of creativity far more radical, challenging, and rarer than the former. Following the criteria from Newell and Simon elaborated above, we can see that both forms of creativity should produce results that are appreciably novel and useful (criterion 1), but exploratory creativity is more likely to arise from a thorough and persistent search of a well-understood space (criterion 3), while transformational creativity should involve the rejection of some of the constraints that define this space (criterion 2) or some of the assumptions that define the problem itself (criterion 4). Boden's insights have guided work in computational creativity at a very general level, providing more an inspirational touchstone for development work than a technical framework of algorithmic substance. However, Boden's insights have more recently also become the subject of formalization, most notably in the work by Geraint Wiggins.

Generation and evaluation

The criterion that creative products should be novel and useful means that creative computational systems are typically structured into two phases, generation and evaluation. In the first phase, novel (to the system itself, and thus P-creative) constructs are generated; unoriginal constructs that are already known to the system are filtered out at this stage. This body of potentially creative constructs is then evaluated, to determine which are meaningful and useful and which are not. This two-phase structure conforms to the Geneplore model of Finke, Ward and Smith, which is a psychological model of creative generation based on empirical observation of human creativity.
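
A minimal sketch of this two-phase structure, with a deliberately trivial generator and an invented placeholder for the evaluation heuristic, might look like the following:

    import random

    def generate(known, size=8):
        """Yield candidate pitch sequences, filtering out ones already known (the P-creative check)."""
        while True:
            candidate = tuple(random.choice([60, 62, 64, 65, 67, 69, 71]) for _ in range(size))
            if candidate not in known:
                known.add(candidate)
                yield candidate

    def value(candidate):
        """Toy evaluation: reward stepwise motion, penalize large leaps."""
        steps = [abs(a - b) for a, b in zip(candidate, candidate[1:])]
        return sum(1 if s <= 2 else -1 for s in steps)

    known = set()
    gen = generate(known)
    candidates = [next(gen) for _ in range(200)]   # phase 1: generation
    best = max(candidates, key=value)              # phase 2: evaluation
    print(best, value(best))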

Combinatorial creativity

A great deal, perhaps all, of human creativity can be understood as a novel combination of pre-existing ideas or objects. Common strategies for combinatorial creativity include:
  • Placing a familiar object in an unfamiliar setting (e.g., Marcel Duchamp's Fountain) or an unfamiliar object in a familiar setting (e.g., a fish-out-of-water story such as The Beverly Hillbillies)
  • Blending two superficially different objects or genres (e.g., a sci-fi story set in the Wild West, with robot cowboys, as in Westworld, or the reverse, as in Firefly; Japanese haiku poems, etc.)
  • Comparing a familiar object to a superficially unrelated and semantically distant concept (e.g., "Makeup is the Western burka"; "A zoo is a gallery with living exhibits")
  • Adding a new and unexpected feature to an existing concept (e.g., adding a scalpel to a Swiss Army knife; adding a camera to a mobile phone)
  • Compressing two incongruous scenarios into the same narrative to get a joke (e.g., the Emo Philips joke "Women are always using me to advance their careers. Damned anthropologists!")
  • Using an iconic image from one domain for an unrelated or incongruous idea or product in another domain (e.g., using the Marlboro Man image to sell cars, or to advertise the dangers of smoking-related impotence).
The combinatorial perspective allows us to model creativity as a search process through the space of possible combinations. The combinations can arise from composition or concatenation of different representations, or through a rule-based or stochastic transformation of initial and intermediate representations. Genetic algorithms and neural networks can be used to generate blended or crossover representations that capture a combination of different inputs.
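
A small Python sketch of this search-through-combinations view, using crossover and mutation in the spirit of the genetic-algorithm approach mentioned above (the parent sequences and the scoring function are invented placeholders), is shown below.

    import random

    def crossover(a, b):
        """Splice a prefix of one parent onto a suffix of the other."""
        cut = random.randrange(1, min(len(a), len(b)))
        return a[:cut] + b[cut:]

    def mutate(x, rate=0.1):
        """Perturb elements of a representation with a small probability."""
        return [v + random.choice([-1, 1]) if random.random() < rate else v for v in x]

    def score(x):
        """Placeholder 'usefulness' measure: prefer sequences that rise overall."""
        return x[-1] - x[0]

    parents = [[60, 62, 63, 65, 67], [55, 59, 62, 66, 70]]
    population = [mutate(crossover(*random.sample(parents, 2))) for _ in range(50)]
    print(max(population, key=score))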

Conceptual blending

Mark Turner and Gilles Fauconnier propose a model called Conceptual Integration Networks that elaborates upon Arthur Koestler's ideas about creativity as well as more recent work by Lakoff and Johnson, by synthesizing ideas from Cognitive Linguistic research into mental spaces and conceptual metaphors. Their basic model defines an integration network as four connected spaces:
  • A first input space (contains one conceptual structure or mental space)
  • A second input space (to be blended with the first input)
  • A generic space of stock conventions and image-schemas that allow the input spaces to be understood from an integrated perspective
  • A blend space in which a selected projection of elements from both input spaces are combined; inferences arising from this combination also reside here, sometimes leading to emergent structures that conflict with the inputs.
Fauconnier and Turner describe a collection of optimality principles that are claimed to guide the construction of a well-formed integration network. In essence, they see blending as a compression mechanism in which two or more input structures are compressed into a single blend structure. This compression operates on the level of conceptual relations. For example, a series of similarity relations between the input spaces can be compressed into a single identity relationship in the blend. 
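
A minimal data-structure sketch of such a four-space network might look like the following; the contents are invented (the well-known "this surgeon is a butcher" blend), and real blending systems use far richer representations than flat dictionaries.

    # Two input spaces: small attribute maps standing in for mental spaces.
    input_1 = {"agent": "surgeon", "instrument": "scalpel", "goal": "heal"}
    input_2 = {"agent": "butcher", "instrument": "cleaver", "goal": "carve"}

    # Generic space: the skeletal structure shared by both inputs.
    generic = set(input_1) & set(input_2)          # {'agent', 'instrument', 'goal'}

    # Blend space: a selective projection from both inputs, which can yield
    # emergent structure that conflicts with either input alone
    # (here, the reading of the surgeon as incompetent).
    blend = {"agent": input_1["agent"],            # identity compressed onto the surgeon
             "instrument": input_2["instrument"],  # but wielding the butcher's tool
             "goal": input_2["goal"]}

    print(generic, blend)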

Some computational success has been achieved with the blending model by extending pre-existing computational models of analogical mapping that are compatible by virtue of their emphasis on connected semantic structures. More recently, Francisco Câmara Pereira presented an implementation of blending theory that employs ideas both from GOFAI and genetic algorithms to realize some aspects of blending theory in a practical form; his example domains range from the linguistic to the visual, and the latter most notably includes the creation of mythical monsters by combining 3-D graphical models.

Linguistic creativity

Language provides continuous opportunity for creativity, evident in the generation of novel sentences, phrasings, puns, neologisms, rhymes, allusions, sarcasm, irony, similes, metaphors, analogies, witticisms, and jokes. Native speakers of morphologically rich languages frequently create new word-forms that are easily understood, and some have found their way into the dictionary. The area of natural language generation has been well studied, but these creative aspects of everyday language have yet to be incorporated with any robustness or scale.

Hypothesis of creative patterns

In his seminal work, the applied linguist Ronald Carter hypothesized two main creativity types involving words and word patterns: pattern-reforming creativity and pattern-forming creativity. Pattern-reforming creativity refers to creativity achieved by breaking rules, reforming and reshaping patterns of language, often through individual innovation, while pattern-forming creativity refers to creativity achieved through conformity to language rules rather than breaking them, creating convergence, symmetry and greater mutuality between interlocutors through their interactions in the form of repetitions.

Story generation

Substantial work has been conducted in this area of linguistic creation since the 1970s, with the development of James Meehan's TALE-SPIN system. TALE-SPIN viewed stories as narrative descriptions of a problem-solving effort, and created stories by first establishing a goal for the story's characters so that their search for a solution could be tracked and recorded. The MINSTREL system represents a complex elaboration of this basic approach, distinguishing a range of character-level goals in the story from a range of author-level goals for the story. Systems like Bringsjord's BRUTUS elaborate these ideas further to create stories with complex inter-personal themes like betrayal. Notably, MINSTREL explicitly models the creative process with a set of Transform Recall Adapt Methods (TRAMs) to create novel scenes from old ones. The MEXICA model of Rafael Pérez y Pérez and Mike Sharples is more explicitly interested in the creative process of storytelling, and implements a version of the engagement-reflection cognitive model of creative writing.

The company Narrative Science makes computer generated news and reports commercially available, including summarizing team sporting events based on statistical data from the game. It also creates financial reports and real estate analyses.

Metaphor and simile

Example of a metaphor: "She was an ape."
 
Example of a simile: "Felt like a tiger-fur blanket."

The computational study of these phenomena has mainly focused on interpretation as a knowledge-based process. Computationalists such as Yorick Wilks, James Martin, Dan Fass, John Barnden, and Mark Lee have developed knowledge-based approaches to the processing of metaphors, either at a linguistic level or a logical level. Tony Veale and Yanfen Hao have developed a system, called Sardonicus, that acquires a comprehensive database of explicit similes from the web; these similes are then tagged as bona-fide (e.g., "as hard as steel") or ironic (e.g., "as hairy as a bowling ball", "as pleasant as a root canal"); similes of either type can be retrieved on demand for any given adjective. They use these similes as the basis of an on-line metaphor generation system called Aristotle that can suggest lexical metaphors for a given descriptive goal (e.g., to describe a supermodel as skinny, the source terms "pencil", "whip", "whippet", "rope", "stick-insect" and "snake" are suggested).

Analogy

The process of analogical reasoning has been studied from both a mapping and a retrieval perspective, the latter being key to the generation of novel analogies. The dominant school of research, as advanced by Dedre Gentner, views analogy as a structure-preserving process; this view has been implemented in the structure mapping engine or SME, the MAC/FAC retrieval engine (Many Are Called, Few Are Chosen), ACME (Analogical Constraint Mapping Engine) and ARCS (Analogical Retrieval Constraint System). Other mapping-based approaches include Sapper, which situates the mapping process in a semantic-network model of memory. Analogy is a very active sub-area of creative computation and creative cognition; active figures in this sub-area include Douglas Hofstadter, Paul Thagard, and Keith Holyoak. Also worthy of note here is Peter Turney and Michael Littman's machine learning approach to the solving of SAT-style analogy problems; their approach achieves a score that compares well with average scores achieved by humans on these tests.

Joke generation

Humor is an especially knowledge-hungry process, and the most successful joke-generation systems to date have focussed on pun-generation, as exemplified by the work of Kim Binsted and Graeme Ritchie. This work includes the JAPE system, which can generate a wide range of puns that are consistently evaluated as novel and humorous by young children. An improved version of JAPE has been developed in the guise of the STANDUP system, which has been experimentally deployed as a means of enhancing linguistic interaction with children with communication disabilities. Some limited progress has been made in generating humor that involves other aspects of natural language, such as the deliberate misunderstanding of pronominal reference (in the work of Hans Wim Tinholt and Anton Nijholt), as well as in the generation of humorous acronyms in the HAHAcronym system of Oliviero Stock and Carlo Strapparava.

Neologism

The blending of multiple word forms is a dominant force for new word creation in language; these new words are commonly called "blends" or "portmanteau words" (after Lewis Carroll). Tony Veale has developed a system called ZeitGeist that harvests neological headwords from Wikipedia and interprets them relative to their local context in Wikipedia and relative to specific word senses in WordNet. ZeitGeist has been extended to generate neologisms of its own; the approach combines elements from an inventory of word parts that are harvested from WordNet, and simultaneously determines likely glosses for these new words (e.g., "food traveler" for "gastronaut" and "time traveler" for "chrononaut"). It then uses Web search to determine which glosses are meaningful and which neologisms have not been used before; this search identifies the subset of generated words that are both novel ("H-creative") and useful. Neurolinguistic inspirations have been used to analyze the process of novel word creation in the brain, to understand the neurocognitive processes responsible for intuition, insight, imagination and creativity, and to create a server that invents novel names for products based on their descriptions. Further, the system Nehovah blends two source words into a neologism that combines the meanings of the two source words. Nehovah searches WordNet for synonyms and TheTopTens.com for pop-culture hyponyms. The synonyms and hyponyms are blended together to create a set of candidate neologisms, which are then scored on their word structure, how unique the word is, how clearly the concepts are conveyed, and whether the neologism has a pop-culture reference. Nehovah loosely follows conceptual blending.
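A minimal sketch of the word-part recombination idea follows; the PREFIXES, SUFFIXES and EXISTING_WORDS tables are hand-written assumptions standing in for parts harvested from WordNet and for a web-search novelty check, and the sketch does not reproduce ZeitGeist's or Nehovah's actual scoring.

```python
# Minimal sketch of neologism generation from an inventory of word parts,
# loosely in the spirit of ZeitGeist. The part inventory and glosses are
# invented stand-ins; the real system harvests parts from WordNet and
# validates candidate glosses with web search.

PREFIXES = {"gastro": "food", "chrono": "time", "cyber": "computer"}
SUFFIXES = {"naut": "traveler", "phile": "lover", "phobe": "fearer"}

EXISTING_WORDS = {"cyberphile"}   # stand-in for a web-search novelty check

def invent_words():
    """Combine word parts, build a gloss, and keep only apparently novel coinages."""
    coinages = []
    for prefix, p_gloss in PREFIXES.items():
        for suffix, s_gloss in SUFFIXES.items():
            word = prefix + suffix
            if word in EXISTING_WORDS:
                continue                      # not "H-creative": already attested
            coinages.append((word, f"{p_gloss} {s_gloss}"))
    return coinages

if __name__ == "__main__":
    for word, gloss in invent_words():
        print(f"{word}: {gloss}")   # e.g. "gastronaut: food traveler"
```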

A corpus-linguistic approach to the search for and extraction of neologisms has also been shown to be possible. Using the Corpus of Contemporary American English as a reference corpus, Locky Law has performed an extraction of neologisms, portmanteaus and slang words using the hapax legomena that appear in the scripts of the American TV drama House M.D.
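The core of such an extraction is straightforward to sketch: count tokens, keep the hapax legomena, and filter out anything already attested in a reference vocabulary. In the sketch below, the SCRIPT text and the REFERENCE_VOCAB set are tiny invented stand-ins for the House M.D. transcripts and for a reference corpus such as COCA.

```python
# Minimal sketch of a corpus-based hunt for neologism candidates via hapax
# legomena. The "script" and the reference vocabulary are invented stand-ins.

import re
from collections import Counter

SCRIPT = """
Everybody lies. The ethicist upstairs calls it a diagnostic vibe,
but I call it everybody lies. Cuddy disagrees. Sarcoidosis, again.
"""

REFERENCE_VOCAB = {"everybody", "lies", "the", "upstairs", "calls", "it", "a",
                   "but", "i", "call", "disagrees", "again", "diagnostic"}

def hapax_candidates(text, reference_vocab):
    """Return words that occur exactly once and are absent from the reference vocabulary."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    return sorted(w for w, c in counts.items() if c == 1 and w not in reference_vocab)

if __name__ == "__main__":
    print(hapax_candidates(SCRIPT, REFERENCE_VOCAB))   # e.g. ['cuddy', 'ethicist', 'sarcoidosis', 'vibe']
```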

In terms of linguistic research on neologisms, Stefan Th. Gries has performed a quantitative analysis of blend structure in English and found that the degree of recognizability of the source words and the similarity of the source words to the blend play a vital role in blend formation. The results were validated through a comparison of intentional blends to speech-error blends.

Poetry

Like jokes, poems involve a complex interaction of different constraints, and no general-purpose poem generator adequately combines the meaning, phrasing, structure and rhyme aspects of poetry. Nonetheless, Pablo Gervás has developed a noteworthy system called ASPERA that employs a case-based reasoning (CBR) approach to generating poetic formulations of a given input text via a composition of poetic fragments that are retrieved from a case-base of existing poems. Each poem fragment in the ASPERA case-base is annotated with a prose string that expresses the meaning of the fragment, and this prose string is used as the retrieval key for each fragment. Metrical rules are then used to combine these fragments into a well-formed poetic structure. Racter is an example of such a software project.
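A toy case-based retrieval loop of this kind might look as follows; the CASE_BASE fragments, prose keys and syllable counts are invented, and the word-overlap retrieval and fixed-syllable "metrical rule" are drastic simplifications of ASPERA's actual machinery.

```python
# Toy case-based-reasoning sketch in the spirit of ASPERA: poem fragments are
# indexed by a prose string describing their meaning; we retrieve the fragments
# whose keys best overlap the input text and stack them under a crude metrical
# constraint. All fragments and keys are invented stand-ins.

CASE_BASE = [
    # (prose key, poetic fragment, syllable count)
    ("the sea at night is dark and quiet", "dark waters hush beneath the moon", 8),
    ("a lover waits alone", "and I alone keep watch for you", 8),
    ("morning light returns", "till morning gilds the window pane", 8),
]

def overlap(a, b):
    return len(set(a.lower().split()) & set(b.lower().split()))

def compose(input_text, lines=2, syllables=8):
    """Retrieve best-matching fragments of the required length and stack them into a stanza."""
    pool = [case for case in CASE_BASE if case[2] == syllables]   # crude metrical rule
    ranked = sorted(pool, key=lambda case: overlap(input_text, case[0]), reverse=True)
    return "\n".join(fragment for _, fragment, _ in ranked[:lines])

if __name__ == "__main__":
    print(compose("I wait alone by the quiet sea at night"))
```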

Musical creativity

Computational creativity in the music domain has focused both on the generation of musical scores for use by human musicians, and on the generation of music for performance by computers. The domain of generation has included classical music (with software that generates music in the style of Mozart and Bach) and jazz. Most notably, David Cope has written a software system called "Experiments in Musical Intelligence" (or "EMI") that is capable of analyzing and generalizing from existing music by a human composer to generate novel musical compositions in the same style. EMI's output is convincing enough that human listeners are often persuaded its music was competently composed by a human.

In the field of contemporary classical music, Iamus is the first computer to compose from scratch and produce final scores that professional performers can play. The London Symphony Orchestra played a piece for full orchestra, included in Iamus' debut CD, which New Scientist described as "The first major work composed by a computer and performed by a full orchestra". Melomics, the technology behind Iamus, is able to generate pieces in different styles of music with a similar level of quality.

Creativity research in jazz has focused on the process of improvisation and the cognitive demands that this places on a musical agent: reasoning about time, remembering and conceptualizing what has already been played, and planning ahead for what might be played next. The robot Shimon, developed by Gil Weinberg of Georgia Tech, has demonstrated jazz improvisation. Virtual improvisation software based on machine-learning models of musical style, such as OMax, SoMax and PyOracle, is used to create improvisations in real time by re-injecting variable-length sequences learned on the fly from a live performer.
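As a rough, much-simplified illustration of learning a performer's material on the fly and re-injecting it, the sketch below uses a first-order Markov model rather than the factor-oracle structures used by OMax or PyOracle; the class, the note names and the example input are all invented for illustration.

```python
# Minimal sketch of style learning for real-time improvisation: a first-order
# Markov model is learned on the fly from the notes a performer plays, then
# used to "re-inject" continuations in a similar style. This is a drastic
# simplification of systems such as OMax, SoMax or PyOracle.

import random
from collections import defaultdict

class Improviser:
    def __init__(self):
        self.transitions = defaultdict(list)   # note -> notes that followed it
        self.last = None

    def listen(self, note):
        """Update the model with one incoming note from the live performer."""
        if self.last is not None:
            self.transitions[self.last].append(note)
        self.last = note

    def improvise(self, start, length=8):
        """Generate a continuation by walking the learned transitions."""
        phrase, note = [start], start
        for _ in range(length - 1):
            options = self.transitions.get(note)
            if not options:
                break
            note = random.choice(options)
            phrase.append(note)
        return phrase

if __name__ == "__main__":
    imp = Improviser()
    for n in ["C4", "E4", "G4", "E4", "C4", "D4", "E4", "G4", "C5"]:   # live input
        imp.listen(n)
    print(imp.improvise("C4"))
```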

In 1994, a Creativity Machine architecture (see above) was able to generate 11,000 musical hooks by training a synaptically perturbed neural net on 100 melodies that had appeared on the top-ten list over the preceding 30 years. In 1996, a self-bootstrapping Creativity Machine observed audience facial expressions through an advanced machine-vision system and perfected its musical talents to generate an album entitled "Song of the Neurons".

In the field of musical composition, the patented work of René-Louis Baron made it possible to build a robot that can create and play a multitude of orchestrated melodies, described as "coherent", in any musical style. Any external physical parameter associated with one or more specific musical parameters can influence and develop each of these songs in real time, while the song is being listened to. The patented invention Medal-Composer raises questions of copyright.

Visual and artistic creativity

Computational creativity in the generation of visual art has had some notable successes in the creation of both abstract art and representational art. The most famous program in this domain is Harold Cohen's AARON, which has been continuously developed and augmented since 1973. Though formulaic, AARON exhibits a range of outputs, generating black-and-white drawings or color paintings that incorporate human figures (such as dancers), potted plants, rocks, and other elements of background imagery. These images are of a sufficiently high quality to be displayed in reputable galleries.

Other software artists of note include the NEvAr system (for "Neuro-Evolutionary Art") of Penousal Machado. NEvAr uses a genetic algorithm to derive a mathematical function that is then used to generate a coloured three-dimensional surface. A human user is allowed to select the best pictures after each phase of the genetic algorithm, and these preferences are used to guide successive phases, thereby pushing NEvAr's search into pockets of the search space that are considered most appealing to the user. 
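A toy interactive-evolution loop in this spirit is sketched below; the three-coefficient genome, the ASCII rendering and the randomly simulated "user choice" are invented stand-ins for NEvAr's evolved expression trees and for real human selection.

```python
# Toy interactive-evolution sketch in the spirit of NEvAr: each genome encodes
# a small mathematical function of (x, y); the "user" repeatedly picks a
# favourite image, and the next generation is bred from it. Here the user is
# simulated by a random choice, and images are rendered as tiny ASCII grids.

import math, random

def render(genome, size=8):
    a, b, c = genome
    grid = []
    for yi in range(size):
        row = ""
        for xi in range(size):
            x, y = xi / size, yi / size
            v = math.sin(a * x + b * y) * math.cos(c * x * y)   # value in [-1, 1]
            row += " .:-=+*#"[int((v + 1) / 2 * 7)]
        grid.append(row)
    return "\n".join(grid)

def mutate(genome, scale=0.8):
    return tuple(g + random.gauss(0, scale) for g in genome)

def evolve(generations=4, population=4):
    pop = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(population)]
    for _ in range(generations):
        # In NEvAr a human picks the most appealing image; here we pick at random.
        favourite = random.choice(pop)
        pop = [favourite] + [mutate(favourite) for _ in range(population - 1)]
    return favourite

if __name__ == "__main__":
    print(render(evolve()))
```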

The Painting Fool, developed by Simon Colton, originated as a system for overpainting digital images of a given scene in a choice of different painting styles, colour palettes and brush types. Given its dependence on an input source image, the earliest iterations of the Painting Fool raised questions about the extent of, or lack of, creativity in a computational art system. In more recent work, however, The Painting Fool has been extended to create novel images, much as AARON does, from its own limited imagination. Images in this vein include cityscapes and forests, which are generated by a process of constraint satisfaction from some basic scenarios provided by the user (e.g., these scenarios allow the system to infer that objects closer to the viewing plane should be larger and more color-saturated, while those further away should be smaller and less saturated). Artistically, the images now created by the Painting Fool appear on a par with those created by AARON, though the extensible mechanisms employed by the former (constraint satisfaction, etc.) may well allow it to develop into a more elaborate and sophisticated painter.

The artist Krasimira Dimtchevska and the software developer Svillen Ranev have created a computational system combining a rule-based generator of English sentences and a visual composition builder that converts sentences generated by the system into abstract art. The software automatically generates an indefinite number of different images using different colour, shape and size palettes. It also allows the user to select the subject of the generated sentences and/or one or more of the palettes used by the visual composition builder.

An emerging area of computational creativity is that of video games. ANGELINA is a system for creatively developing video games in Java by Michael Cook. One important aspect is Mechanic Miner, a system which can generate short segments of code which act as simple game mechanics. ANGELINA can evaluate these mechanics for usefulness by playing simple unsolvable game levels and testing to see if the new mechanic makes the level solvable. Sometimes Mechanic Miner discovers bugs in the code and exploits these to make new mechanics for the player to solve problems with.
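The generate-and-test loop behind this idea can be sketched as follows; the miniature level, the lambda "mechanics" and the bounded reachability check are invented stand-ins for ANGELINA's generated Java code and its actual playability tests.

```python
# Toy sketch of the Mechanic Miner idea: generate small candidate "mechanics"
# (here, simple changes to the player's position), then keep the ones that
# turn an otherwise unsolvable level into a solvable one.

GOAL = (5, 2)   # the exit sits on a ledge: x >= 5 and height >= 2

BASIC_MOVES = [lambda x, y: (x + 1, y)]   # walk right; no way to gain height

CANDIDATE_MECHANICS = {
    "high_jump":    lambda x, y: (x, y + 2),
    "teleport":     lambda x, y: (x + 2, y),   # still stuck at ground level
    "gravity_flip": lambda x, y: (x, 3 - y),
}

def solvable(mechanic=None, max_moves=8):
    """Bounded search: can the player reach the goal using basic moves plus the candidate mechanic?"""
    states = {(0, 0)}
    moves = BASIC_MOVES + ([mechanic] if mechanic else [])
    for _ in range(max_moves):
        states |= {m(x, y) for (x, y) in states for m in moves}
        if any(x >= GOAL[0] and y >= GOAL[1] for (x, y) in states):
            return True
    return False

if __name__ == "__main__":
    print("without any mechanic:", solvable())          # the level is unsolvable
    for name, mechanic in CANDIDATE_MECHANICS.items():
        print(name, "->", "useful" if solvable(mechanic) else "useless")
```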

In July 2015 Google released DeepDream, an open-source computer vision program created to detect faces and other patterns in images with the aim of automatically classifying them. It uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, creating a dreamlike, psychedelic appearance in the deliberately over-processed images.
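A rough sketch of the underlying technique (gradient ascent on the input image to amplify a convolutional layer's activations) is shown below. It assumes a recent PyTorch/torchvision installation, substitutes a stock VGG16 for Google's original Inception network, and uses "input.jpg" purely as a placeholder filename; it illustrates the idea rather than reproducing DeepDream itself.

```python
# Rough sketch of the DeepDream idea: gradient ascent on the input image to
# amplify whatever features a convolutional layer already detects.
# Assumes PyTorch and torchvision; this is an illustration, not Google's code.

import torch
from torchvision import models, transforms
from PIL import Image

def dream(image_path, steps=20, lr=0.05):
    cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:20].eval()
    for p in cnn.parameters():
        p.requires_grad_(False)

    img = transforms.ToTensor()(Image.open(image_path).convert("RGB")).unsqueeze(0)
    img.requires_grad_(True)

    for _ in range(steps):
        loss = cnn(img).norm()           # amplify whatever the layer responds to
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.clamp_(0, 1)
            img.grad.zero_()

    return transforms.ToPILImage()(img.squeeze(0).detach())

if __name__ == "__main__":
    dream("input.jpg").save("dream.jpg")   # "input.jpg" is a placeholder path
```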

In August 2015, researchers from Tübingen, Germany created a convolutional neural network that uses neural representations to separate and recombine the content and style of arbitrary images, and can turn images into stylistic imitations of works by artists such as Picasso or Van Gogh in about an hour. Their algorithm is used by the website DeepArt, which allows users to create unique artistic images with it.

In early 2016, a global team of researchers explained how a new computational creativity approach known as the Digital Synaptic Neural Substrate (DSNS) could be used to generate original chess puzzles that were not derived from endgame databases. The DSNS is able to combine features of different objects (e.g. chess problems, paintings, music) using stochastic methods in order to derive new feature specifications which can be used to generate objects in any of the original domains. The generated chess puzzles have also been featured on YouTube.

Creativity in problem solving

Creativity is also useful in allowing for unusual solutions in problem solving. In psychology and cognitive science, this research area is called creative problem solving. The Explicit-Implicit Interaction (EII) theory of creativity has recently been implemented using a CLARION-based computational model that allows for the simulation of incubation and insight in problem solving. The emphasis of this computational creativity project is not on performance per se (as in artificial intelligence projects) but rather on the explanation of the psychological processes leading to human creativity and the reproduction of data collected in psychology experiments. So far, this project has been successful in providing an explanation for incubation effects in simple memory experiments, insight in problem solving, and reproducing the overshadowing effect in problem solving.

Debate about "general" theories of creativity

Some researchers feel that creativity is a complex phenomenon whose study is further complicated by the plasticity of the language we use to describe it. We can describe not just the agent of creativity as "creative" but also the product and the method. Consequently, it could be claimed that it is unrealistic to speak of a general theory of creativity. Nonetheless, some generative principles are more general than others, leading some advocates to claim that certain computational approaches are "general theories". Stephen Thaler, for instance, proposes that certain modalities of neural networks are generative enough, and general enough, to manifest a high degree of creative capabilities. Likewise, the Formal Theory of Creativity is based on a simple computational principle published by Jürgen Schmidhuber in 1991. The theory postulates that creativity and curiosity and selective attention in general are by-products of a simple algorithmic principle for measuring and optimizing learning progress.
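The learning-progress principle can be illustrated with a toy simulation: intrinsic reward is the improvement of a predictor, so a greedy "curious" agent gravitates to what is learnable but not yet learned and loses interest in pure noise. Everything below (the two channels, the learning rate, the greedy selection) is an invented illustration, not Schmidhuber's formal construction.

```python
# Toy illustration of the "learning progress" principle: intrinsic reward is
# the improvement of a predictor, so attention goes to the learnable-but-not-
# yet-learned and away from both the trivial and the random.

import random

class LearnableChannel:
    """A stream the agent can learn to predict (a constant value)."""
    def __init__(self):
        self.estimate = 0.0
    def observe(self):
        target = 5.0
        error_before = abs(target - self.estimate)
        self.estimate += 0.3 * (target - self.estimate)   # simple learning step
        return error_before - abs(target - self.estimate)  # learning progress

class NoisyChannel:
    """Pure noise: the predictor cannot improve, so progress is modelled as ~0."""
    def observe(self):
        return random.uniform(-0.05, 0.05)

if __name__ == "__main__":
    channels = {"learnable": LearnableChannel(), "noise": NoisyChannel()}
    visits = {name: 0 for name in channels}
    rewards = {name: 1.0 for name in channels}   # optimistic initial estimates
    for _ in range(20):
        # Curiosity: attend to the channel that last yielded the most learning progress.
        name = max(rewards, key=rewards.get)
        rewards[name] = channels[name].observe()
        visits[name] += 1
    print(visits)   # most visits go to the learnable channel until its progress dries up
```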

Criticism of computational creativity

Traditional computers, as mainly used in computational creativity applications, do not support creativity, as they fundamentally transform a discrete, limited domain of input parameters into a discrete, limited domain of output parameters using a limited set of computational functions. As such, a computer cannot be creative, as everything in the output must already have been present in the input data or the algorithms. Related discussions and references to related work are captured in recent work on the philosophical foundations of simulation.

Mathematically, the same set of arguments against creativity has been made by Chaitin. Similar observations come from a model-theoretic perspective. All of this criticism emphasizes that computational creativity is useful and may look like creativity, but it is not real creativity, as nothing new is created, only transformed by well-defined algorithms.

Events

The International Conference on Computational Creativity (ICCC) occurs annually, organized by The Association for Computational Creativity. Events in the series include:
  • ICCC 2018, Salamanca, Spain
  • ICCC 2017, Atlanta, Georgia, USA
  • ICCC 2016, Paris, France
  • ICCC 2015, Park City, Utah, USA. Keynote: Emily Short
  • ICCC 2014, Ljubljana, Slovenia. Keynote: Oliver Deussen
  • ICCC 2013, Sydney, Australia. Keynote: Arne Dietrich
  • ICCC 2012, Dublin, Ireland. Keynote: Steven Smith
  • ICCC 2011, Mexico City, Mexico. Keynote: George E Lewis
  • ICCC 2010, Lisbon, Portugal. Keynote/Invited Talks: Nancy J Nersessian and Mary Lou Maher
Previously, the computational creativity community held a dedicated workshop, the International Joint Workshop on Computational Creativity, every year since 1999. Previous events in this series include:
  • IJWCC 2003, Acapulco, Mexico, as part of IJCAI'2003
  • IJWCC 2004, Madrid, Spain, as part of ECCBR'2004
  • IJWCC 2005, Edinburgh, UK, as part of IJCAI'2005
  • IJWCC 2006, Riva del Garda, Italy, as part of ECAI'2006
  • IJWCC 2007, London, UK, a stand-alone event
  • IJWCC 2008, Madrid, Spain, a stand-alone event
The 1st Conference on Computer Simulation of Musical Creativity was held as:
  • CCSMC 2016, 17–19 June, University of Huddersfield, UK. Keynotes: Geraint Wiggins and Graeme Bailey.

Publications and forums

Design Computing and Cognition is one conference that addresses computational creativity. The ACM Creativity and Cognition conference is another forum for issues related to computational creativity. The keynote of Journées d'Informatique Musicale 2016, given by Shlomo Dubnov, was on information-theoretic creativity.

A number of recent books provide either a good introduction or a good overview of the field of Computational Creativity. These include:
  • Pereira, F. C. (2007). "Creativity and Artificial Intelligence: A Conceptual Blending Approach". Applications of Cognitive Linguistics series, Mouton de Gruyter.
  • Veale, T. (2012). "Exploding the Creativity Myth: The Computational Foundations of Linguistic Creativity". Bloomsbury Academic, London.
  • McCormack, J. and d'Inverno, M. (eds.) (2012). "Computers and Creativity". Springer, Berlin.
  • Veale, T., Feyaerts, K. and Forceville, C. (2013, forthcoming). "Creativity and the Agile Mind: A Multidisciplinary study of a Multifaceted phenomenon". Mouton de Gruyter.
In addition to the proceedings of conferences and workshops, the computational creativity community has thus far produced these special journal issues dedicated to the topic:
  • New Generation Computing, volume 24, issue 3, 2006
  • Journal of Knowledge-Based Systems, volume 19, issue 7, November 2006
  • AI Magazine, volume 30, number 3, Fall 2009
  • Minds and Machines, volume 20, number 4, November 2010
  • Cognitive Computation, volume 4, issue 3, September 2012
  • AIEDAM, volume 27, number 4, Fall 2013
  • Computers in Entertainment, two special issues on Music Meta-Creation (MuMe), Fall 2016 (forthcoming)
In addition to these, a new journal has been launched that focuses on computational creativity within the field of music.
  • JCMS 2016, Journal of Creative Music Systems

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...