Sunday, December 24, 2023

Generative grammar

From Wikipedia, the free encyclopedia
A generative parse tree: the sentence is divided into a noun phrase (subject) and a verb phrase which includes the object. This is in contrast to structural and functional grammars, which consider the subject and object as equal constituents.

Generative grammar, or generativism /ˈdʒɛnərətɪvɪzəm/, is a linguistic theory that regards linguistics as the study of a hypothesised innate grammatical structure. It is a biological or biologistic modification of earlier structuralist theories of linguistics, deriving from logical syntax and glossematics. Generative grammar considers grammar as a system of rules that generates exactly those combinations of words that form grammatical sentences in a given language. It is a system of explicit rules that may apply repeatedly to generate an indefinite number of sentences, which can be as long as one wants them to be. The difference from structural and functional models is that in generative grammar the object is base-generated within the verb phrase. This purportedly cognitive structure is thought of as being a part of a universal grammar, a syntactic structure which is caused by a genetic mutation in humans.

Generativists have created numerous theories to make the NP VP (NP) analysis work in natural language description: that is, to have the subject and the verb phrase appear as independent constituents, with the object placed within the verb phrase. A main point of interest remains how to appropriately analyse Wh-movement and other cases where the subject appears to separate the verb from the object. Although generativists claim this is a cognitively real structure, neuroscience has found no evidence for it. In other words, generative grammar encompasses proposed models of linguistic cognition, but there is still no specific indication that these are correct. It has recently been argued that the success of large language models undermines key claims of generative syntax, because such models are based on markedly different assumptions, including gradient probability and memorized constructions, and out-perform generative theories both in accounting for syntactic structure and in integration with cognition and neuroscience.

Frameworks

There are a number of different approaches to generative grammar. Common to all is the effort to come up with a set of rules or principles that formally defines each and every one of the members of the set of well-formed expressions of a natural language. The term generative grammar has been associated with at least the following schools of linguistics:

Historical development of models of transformational grammar

Leonard Bloomfield, an influential linguist in the American Structuralist tradition, saw the ancient Indian grammarian Pāṇini as an antecedent of structuralism. However, in Aspects of the Theory of Syntax, Chomsky writes that "even Panini's grammar can be interpreted as a fragment of such a 'generative grammar'", a view that he reiterated in an award acceptance speech delivered in India in 2001, where he claimed that "the first 'generative grammar' in something like the modern sense is Panini's grammar of Sanskrit".

Military funding to generativist research was influential to its early success in the 1960s.

Generative grammar has been under development since the mid-1950s, and has undergone many changes in the types of rules and representations that are used to predict grammaticality. In tracing the historical development of ideas within generative grammar, it is useful to refer to the various stages in the development of the theory:

Standard theory (1956–1965)

The so-called standard theory corresponds to the original model of generative grammar laid out by Chomsky in 1965.

A core aspect of standard theory is the distinction between two different representations of a sentence, called deep structure and surface structure. The two representations are linked to each other by transformational grammar.

Extended standard theory (1965–1973)

The so-called extended standard theory was formulated in the late 1960s and early 1970s. Its features include:

  • syntactic constraints
  • generalized phrase structures (X-bar theory)

Revised extended standard theory (1973–1976)

The so-called revised extended standard theory was formulated between 1973 and 1976.

Relational grammar (ca. 1975–1990)

An alternative model of syntax based on the idea that notions like subject, direct object, and indirect object play a primary role in grammar.

Government and binding/principles and parameters theory (1981–1990)

Chomsky's Lectures on Government and Binding (1981) and Barriers (1986).

Minimalist program (1990–present)

The minimalist program is a line of inquiry that hypothesizes that the human language faculty is optimal, containing only what is necessary to meet humans' physical and communicative needs, and seeks to identify the necessary properties of such a system. It was proposed by Chomsky in 1993.

Context-free grammars

Generative grammars can be described and compared with the aid of the Chomsky hierarchy (proposed by Chomsky in the 1950s). This sets out a series of types of formal grammars with increasing expressive power. Among the simplest types are the regular grammars (type 3); Chomsky argues that these are not adequate as models for human language, because all natural human languages allow the center-embedding of strings within strings, which regular grammars cannot capture.
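As a concrete illustration (a toy sketch, not from the article), a context-free rule can enforce the matched nesting of the center-embedded language aⁿbⁿ, whereas a regular expression can only check the order of the symbols, not that the counts match:

```python
import re

def is_anbn(s: str) -> bool:
    """Recognize the center-embedded language a^n b^n (n >= 0)
    via the context-free rule S -> 'a' S 'b' | ''."""
    if s == "":
        return True
    return s.startswith("a") and s.endswith("b") and is_anbn(s[1:-1])

# A regular grammar (here the regex a*b*) cannot require equal counts:
regular = re.compile(r"^a*b*$")

assert is_anbn("aabb") and not is_anbn("aab")
assert regular.match("aab")  # wrongly accepted by the regular approximation
```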

At a higher level of complexity are the context-free grammars (type 2). The derivation of a sentence by such a grammar can be depicted as a derivation tree. Linguists working within generative grammar often view such trees as a primary object of study. According to this view, a sentence is not merely a string of words. Instead, adjacent words are combined into constituents, which can then be further combined with other words or constituents to create a hierarchical tree-structure.

The derivation of a simple tree-structure for the sentence "the dog ate the bone" proceeds as follows. The determiner the and noun dog combine to create the noun phrase the dog. A second noun phrase the bone is created with determiner the and noun bone. The verb ate combines with the second noun phrase, the bone, to create the verb phrase ate the bone. Finally, the first noun phrase, the dog, combines with the verb phrase, ate the bone, to complete the sentence: the dog ate the bone. The following tree diagram illustrates this derivation and the resulting structure:

Such a tree diagram is also called a phrase marker. Phrase markers can be represented more conveniently in text form (though the result is less easy to read); in this format the above sentence would be rendered as:
[S [NP [D The ] [N dog ] ] [VP [V ate ] [NP [D the ] [N bone ] ] ] ]
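This bracketed form is easy to produce mechanically. A minimal sketch (plain Python, with the tree hard-coded rather than derived by a parser) renders the same phrase marker from a nested (label, children) structure:

```python
# Each node is (label, children); a leaf pairs a category with a word.
def bracket(node):
    label, children = node
    if isinstance(children, str):          # lexical leaf, e.g. (D, "the")
        return f"[{label} {children} ]"
    return f"[{label} " + " ".join(bracket(c) for c in children) + " ]"

tree = ("S", [
    ("NP", [("D", "The"), ("N", "dog")]),
    ("VP", [("V", "ate"),
            ("NP", [("D", "the"), ("N", "bone")])]),
])

print(bracket(tree))
# [S [NP [D The ] [N dog ] ] [VP [V ate ] [NP [D the ] [N bone ] ] ] ]
```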

Chomsky has argued that phrase structure grammars are also inadequate for describing natural languages, and formulated the more complex system of transformational grammar.

Evidentiality

Noam Chomsky, the main proponent of generative grammar, believed he had found linguistic evidence that syntactic structures are not learned but "acquired" by the child from universal grammar. This led to the establishment of the poverty of the stimulus argument in the 1980s. However, critics claimed Chomsky's linguistic analysis had been inadequate. Linguistic studies were conducted to show that children have innate knowledge of grammar that they could not have learned. For example, it was shown that a child acquiring English knows how to differentiate between the place of the verb in main clauses and the place of the verb in relative clauses. In the experiment, children were asked to turn a declarative sentence with a relative clause into an interrogative sentence. Against the expectations of the researchers, the children did not move the verb of the relative clause to the sentence-initial position, but instead moved the verb of the main clause, as is grammatical. Critics, however, pointed out that this was not evidence for the poverty of the stimulus, because the underlying structures the children proved able to manipulate are actually highly common in children's literature and everyday language. This led to a heated debate, which resulted in the rejection of generative grammar by mainstream psycholinguistics and applied linguistics around 2000. In the aftermath, some professionals argued that decades of research had been wasted on generative grammar, an approach which has failed to make a lasting impact on the field.

The sentence from the study which shows that it is not the verb in the relative clause, but the verb in the main clause that raises to the head C°.

There is no evidence that syntactic structures are innate. While some hopes were raised at the discovery of the FOXP2 gene, there is not enough support for the idea that it is 'the grammar gene' or that it had much to do with the relatively recent emergence of syntactical speech.

Neuroscientific studies using ERPs have found no scientific evidence for the claim that the human mind processes grammatical objects as if they were placed inside the verb phrase. Instead, brain research has shown that sentence processing is based on the interaction of semantic and syntactic processing. However, since generative grammar is not a theory of neurology but a theory of psychology, its proponents consider it entirely normal that neurology finds no concrete verb phrase in the brain. On this view, the rules do not literally exist in our brains, but they do model the external behaviour of the mind; this is why generative grammar claims to be a theory of psychology and to be cognitively real.

Generativists also claim that language is placed inside its own mind module and that there is no interaction between first-language processing and other types of information processing, such as mathematics. This claim is not based on research or the general scientific understanding of how the brain works.

Chomsky has answered the criticism by emphasising that his theories are actually counter-evidential. However, he believes it to be a case where the real value of the research is only understood later on, as it was with Galileo.

Music

Generative grammar has been used in music theory and analysis since the 1980s. The most well-known approaches were developed by Mark Steedman as well as Fred Lerdahl and Ray Jackendoff, who formalized and extended ideas from Schenkerian analysis. More recently, such early generative approaches to music were further developed and extended by various scholars. The French composer Philippe Manoury applied the principles of generative grammar to contemporary classical music.

Algorithmic composition

From Wikipedia, the free encyclopedia

Algorithmic composition is the technique of using algorithms to create music.

Algorithms (or, at the very least, formal sets of rules) have been used to compose music for centuries; the procedures used to plot voice-leading in Western counterpoint, for example, can often be reduced to algorithmic determinacy. The term can be used to describe music-generating techniques that run without ongoing human intervention, for example through the introduction of chance procedures. However, through live coding and other interactive interfaces, a fully human-centric approach to algorithmic composition is possible.

Some algorithms or data that have no immediate musical relevance are used by composers as creative inspiration for their music. Algorithms such as fractals, L-systems, statistical models, and even arbitrary data (e.g. census figures, GIS coordinates, or magnetic field measurements) have been used as source materials.

Models for algorithmic composition

Compositional algorithms are usually classified by the specific programming techniques they use. The results of the process can then be divided into 1) music composed by computer and 2) music composed with the aid of a computer. Music may be considered composed by computer when the algorithm is able to make choices of its own during the creation process.

Another way to sort compositional algorithms is to examine the results of their compositional processes. Algorithms can either 1) provide notational information (sheet music or MIDI) for other instruments or 2) provide an independent way of sound synthesis (playing the composition by itself). There are also algorithms creating both notational data and sound synthesis.

One way to categorize compositional algorithms is by their structure and the way they process data, as seen in this model of eight partly overlapping types:

  • translational models
  • mathematical models
  • knowledge-based systems
  • grammars
  • optimization approaches
  • evolutionary methods
  • systems which learn
  • hybrid systems

Translational models

This is an approach to music synthesis that involves "translating" information from an existing non-musical medium into a new sound. The translation can be either rule-based or stochastic. For example, when translating a picture into sound, a JPEG image of a horizontal line may be interpreted as a constant pitch, while an upwards-slanted line may become an ascending scale. Oftentimes the software seeks to extract concepts or metaphors from the medium (such as height or sentiment) and apply the extracted information to generate songs using the ways music theory typically represents those concepts. Another example is the translation of text into music: such a system may extract sentiment (positive or negative) from the text using machine learning methods like sentiment analysis, and represent that sentiment through chord quality, such as minor (sad) or major (happy) chords, in the musical output.
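A minimal rule-based sketch of such a translation; the mapping from pixel brightness to a two-octave C-major scale is an arbitrary assumption for illustration:

```python
# Rule-based "translation": map each value of a non-musical data series
# (here a made-up scanline of pixel brightnesses, 0-255) to a MIDI pitch
# in a two-octave C major scale. The mapping itself is an invented choice.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72, 74, 76, 77, 79, 81, 83]

def brightness_to_pitches(scanline):
    step = 256 / len(C_MAJOR)
    return [C_MAJOR[min(int(v / step), len(C_MAJOR) - 1)] for v in scanline]

scanline = [0, 40, 90, 140, 200, 255]     # an upwards-slanting line...
print(brightness_to_pitches(scanline))    # ...becomes an ascending scale
```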

Mathematical models

Mathematical models are based on mathematical equations and random events. The most common way to create compositions through mathematics is stochastic processes. In stochastic models a piece of music is composed as a result of non-deterministic methods. The compositional process is only partially controlled by the composer by weighting the possibilities of random events. Prominent examples of stochastic algorithms are Markov chains and various uses of Gaussian distributions. Stochastic algorithms are often used together with other algorithms in various decision-making processes.
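A first-order Markov chain over note names can be sketched as follows; the transition weights here are illustrative, not taken from any corpus:

```python
import random

# First-order Markov chain over note names; the composer "weights the
# possibilities of random events" by choosing these transition weights.
TRANSITIONS = {
    "C": [("D", 0.5), ("E", 0.3), ("G", 0.2)],
    "D": [("C", 0.4), ("E", 0.6)],
    "E": [("D", 0.3), ("F", 0.4), ("G", 0.3)],
    "F": [("E", 0.7), ("G", 0.3)],
    "G": [("C", 0.5), ("F", 0.5)],
}

def markov_melody(start="C", length=8, seed=None):
    rng = random.Random(seed)
    note, melody = start, [start]
    for _ in range(length - 1):
        options, weights = zip(*TRANSITIONS[note])
        note = rng.choices(options, weights=weights)[0]
        melody.append(note)
    return melody

print(markov_melody(seed=1))   # reproducible for a fixed seed
```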

Music has also been composed through natural phenomena. These chaotic models create compositions from the harmonic and inharmonic phenomena of nature. For example, since the 1970s fractals have been studied also as models for algorithmic composition.

As an example of deterministic composition through mathematical models, the On-Line Encyclopedia of Integer Sequences provides an option to play an integer sequence as 12-tone equal temperament music. (It is initially set to convert each integer to a note on an 88-key musical keyboard by computing the integer modulo 88, at a steady rhythm. Thus the sequence 123456, the natural numbers, yields half of a chromatic scale.) As another example, the all-interval series has been used for computer-aided composition.
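The modulo-88 mapping can be sketched in a few lines; anchoring key 0 at MIDI note 21 (A0, the lowest key of a standard 88-key keyboard) is an assumption made for illustration:

```python
# Sketch of the OEIS-style playback rule: each term of an integer
# sequence is taken modulo 88 and played as a key of an 88-key keyboard
# (assumed here to span MIDI notes 21-108), at a steady rhythm.
def sequence_to_midi(seq):
    return [21 + (n % 88) for n in seq]

naturals = range(1, 7)                 # the terms 1, 2, 3, 4, 5, 6
print(sequence_to_midi(naturals))      # six consecutive semitones:
                                       # half of a chromatic scale
```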

Knowledge-based systems

One way to create compositions is to isolate the aesthetic code of a certain musical genre and use this code to create new similar compositions. Knowledge-based systems are based on a pre-made set of arguments that can be used to compose new works of the same style or genre. Usually this is accomplished by a set of tests or rules requiring fulfillment for the composition to be complete.
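A toy sketch of the idea: a pre-made rule set acts as the tests a candidate composition must pass before it counts as complete (the rules themselves are invented for illustration):

```python
# Knowledge-based sketch: a candidate chord progression is accepted only
# if it satisfies every rule in a pre-made (toy) "style" rule set.
STYLE_RULES = [
    ("starts on the tonic",   lambda p: p[0] == "I"),
    ("ends with V-I cadence", lambda p: p[-2:] == ["V", "I"]),
    ("no immediate repeats",  lambda p: all(a != b for a, b in zip(p, p[1:]))),
]

def satisfies_style(progression):
    return all(test(progression) for _, test in STYLE_RULES)

print(satisfies_style(["I", "IV", "V", "I"]))   # True: all rules pass
print(satisfies_style(["I", "I", "V", "I"]))    # False: immediate repeat
```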

Grammars

Music can also be examined as a language with a distinctive grammar set. Compositions are created by first constructing a musical grammar, which is then used to create comprehensible musical pieces. Grammars often include rules for macro-level composing, for instance harmonies and rhythm, rather than single notes.
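A toy musical grammar in this spirit, with invented rewrite rules that expand macro-level structure (phrases and bars) down to chord symbols:

```python
import random

# Invented rewrite rules: a phrase expands into bars, bars into chords --
# macro-level structure first, as the text describes.
RULES = {
    "PHRASE":  [["BAR", "BAR", "CADENCE"]],
    "BAR":     [["I", "IV"], ["I", "V"], ["vi", "IV"]],
    "CADENCE": [["V", "I"]],
}

def expand(symbol, rng):
    if symbol not in RULES:              # terminal: a chord symbol
        return [symbol]
    production = rng.choice(RULES[symbol])
    return [c for part in production for c in expand(part, rng)]

print(expand("PHRASE", random.Random(0)))  # six chords ending in V-I
```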

Optimization approaches

When generating well defined styles, music can be seen as a combinatorial optimization problem, whereby the aim is to find the right combination of notes such that the objective function is minimized. This objective function typically contains rules of a particular style, but could be learned using machine learning methods such as Markov models. Researchers have generated music using a myriad of different optimization methods, including integer programming, variable neighbourhood search, and evolutionary methods as mentioned in the next subsection.
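A minimal sketch of the combinatorial-optimization view, using simple hill climbing and an invented objective function that penalizes large melodic leaps:

```python
import random

# Combinatorial-optimization sketch: search for an 8-note line over a
# scale that minimizes a hand-written "style" cost (an invented rule
# penalizing melodic leaps larger than a third).
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]

def cost(melody):
    return sum(max(0, abs(b - a) - 4) for a, b in zip(melody, melody[1:]))

def hill_climb(steps=2000, seed=0):
    rng = random.Random(seed)
    melody = [rng.choice(SCALE) for _ in range(8)]
    for _ in range(steps):
        cand = melody[:]
        cand[rng.randrange(8)] = rng.choice(SCALE)   # local move
        if cost(cand) <= cost(melody):               # never get worse
            melody = cand
    return melody

best = hill_climb()
print(best, "cost:", cost(best))
```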

Evolutionary methods

Evolutionary methods of composing music are based on genetic algorithms. The composition is built by means of an evolutionary process. Through mutation and natural selection, different solutions evolve towards a suitable musical piece. Iterative action of the algorithm cuts out bad solutions and creates new ones from those surviving the process. The results of the process are supervised by the critic, a vital part of the algorithm which controls the quality of the created compositions.
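A minimal genetic-algorithm sketch in this spirit; the "critic" here is just an invented fitness function measuring distance to a target contour:

```python
import random

# Evolutionary sketch: a population of note lists evolves toward an
# invented target contour; selection keeps the fitter half ("the critic
# cuts out bad solutions"), and survivors spawn mutated children.
TARGET = [60, 62, 64, 65, 67, 65, 64, 62]

def fitness(melody):
    return -sum(abs(a - b) for a, b in zip(melody, TARGET))  # higher is better

def evolve(pop_size=30, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(55, 72) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # selection by the critic
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(len(child))] += rng.choice([-1, 1])  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())   # approaches TARGET as generations pass
```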

Evo-Devo approach

Evolutionary methods, combined with developmental processes, constitute the evo-devo approach for the generation and optimization of complex structures. These methods have also been applied to music composition, where the musical structure is obtained by an iterative process that transforms a very simple composition (made of a few notes) into a complex, fully fledged piece (be it a score or a MIDI file).

Systems that learn

Learning systems are programs that have no given knowledge of the genre of music they are working with. Instead, they collect the learning material by themselves from the example material supplied by the user or programmer. The material is then processed into a piece of music similar to the example material. This method of algorithmic composition is strongly linked to algorithmic modeling of style, machine improvisation, and fields such as cognitive science and the study of neural networks. Assayag and Dubnov proposed a variable-length Markov model to learn motif and phrase continuations of different lengths. Marchini and Purwins presented a system that learns the structure of an audio recording of a rhythmical percussion fragment using unsupervised clustering and variable-length Markov chains, and that synthesizes musical variations from it.

Hybrid systems

Programs based on a single algorithmic model rarely succeed in creating aesthetically satisfying results. For that reason, algorithms of different types are often used together to combine their strengths and diminish their weaknesses. Creating hybrid systems for music composition has opened up the field of algorithmic composition and also created many brand-new ways to construct compositions algorithmically. The only major problem with hybrid systems is their growing complexity and the need for resources to combine and test these algorithms.

Another approach, which can be called computer-assisted composition, is to algorithmically create certain structures for finally "hand-made" compositions. As early as the 1960s, Gottfried Michael Koenig developed the computer programs Project 1 and Project 2 for aleatoric music, the output of which was sensibly structured "manually" by means of performance instructions. In the 2000s, Andranik Tangian developed a computer algorithm to determine the time event structures for rhythmic canons and rhythmic fugues, which were then worked out into the harmonic compositions Eine kleine Mathmusik I and Eine kleine Mathmusik II.

Algorithmic art

From Wikipedia, the free encyclopedia
"Octopod" by Mikael Hvidtfeldt Christensen. An example of algorithmic art produced with the software Structure Synth.

Algorithmic art or algorithm art is art, mostly visual art, in which the design is generated by an algorithm. Algorithmic artists are sometimes called algorists.

Overview

Simple Algorithmic Art, generated using random numbers

Algorithmic art, also known as computer-generated art, is a subset of generative art (generated by an autonomous system) and is related to systems art (influenced by systems theory). Fractal art is an example of algorithmic art.

For an image of reasonable size, even the simplest algorithms require too much calculation for manual execution to be practical, and they are thus executed on either a single computer or on a cluster of computers. The final output is typically displayed on a computer monitor, printed with a raster-type printer, or drawn using a plotter. Variability can be introduced by using pseudo-random numbers. There is no consensus as to whether the product of an algorithm that operates on an existing image (or on any input other than pseudo-random numbers) can still be considered computer-generated art, as opposed to computer-assisted art.
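The role of pseudo-random numbers can be sketched briefly: seeding the generator makes an otherwise deterministic program reproducible, while changing the seed introduces controlled variability (the random-walk "artwork" below is an invented stand-in for a real piece):

```python
import random

# Variability via pseudo-randomness: the same seed always reproduces the
# same "artwork" (here just a list of plotter coordinates), while a
# different seed yields a controlled variation of it.
def random_walk(seed, steps=10):
    rng = random.Random(seed)
    x = y = 0
    points = [(0, 0)]
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

print(random_walk(seed=7))   # identical on every run with the same seed
```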

History

Islamic geometric patterns, such as this girih tiling in the Darb-e Imam shrine in Isfahan, are precursors of algorithmic art.

Roman Verostko argues that Islamic geometric patterns are constructed using algorithms, as are Italian Renaissance paintings which make use of mathematical techniques, in particular linear perspective and proportion.

Paolo Uccello made innovative use of a geometric algorithm, incorporating linear perspective in paintings such as The Battle of San Romano (c. 1435–1460): broken lances run along perspective lines.

Some of the earliest known examples of computer-generated algorithmic art were created by Georg Nees, Frieder Nake, A. Michael Noll, Manfred Mohr and Vera Molnár in the early 1960s. These artworks were executed by a plotter controlled by a computer, and were therefore computer-generated art but not digital art. The act of creation lay in writing the program, which specified the sequence of actions to be performed by the plotter. Sonia Landy Sheridan established Generative Systems as a program at the School of the Art Institute of Chicago in 1970 in response to social change brought about in part by the computer-robot communications revolution. Her early work with copier and telematic art focused on the differences between the human hand and the algorithm.

Aside from the ongoing work of Roman Verostko and his fellow algorists, the next known examples are fractal artworks created in the mid-to-late 1980s. These are important here because they use a different means of execution. Whereas the earliest algorithmic art was "drawn" by a plotter, fractal art simply creates an image in computer memory; it is therefore digital art. The native form of a fractal artwork is an image stored on a computer – this is also true of very nearly all equation art and of most recent algorithmic art in general. However, in a stricter sense "fractal art" is not considered algorithmic art, because the algorithm is not devised by the artist.

In light of such ongoing developments, pioneer algorithmic artist Ernest Edmonds has documented the continuing prophetic role of art in human affairs by tracing the early 1960s association between art and the computer up to a present time in which the algorithm is now widely recognized as a key concept for society as a whole.

Role of the algorithm

Letter Field by Judson Rosebush, 1978. Calcomp plotter computer output with liquid inks on rag paper, 15.25 x 21 inches. This image was created using an early version of what became Digital Effects' Vision software, in APL and Fortran on an IBM 370/158. A database of the Souvenir font; random number generation, a statistical basis to determine letter size, color, and position; and a hidden line algorithm combine to produce this scan line raster image, output to a plotter.

From one point of view, for a work of art to be considered algorithmic art, its creation must include a process based on an algorithm devised by the artist. Here, an algorithm is simply a detailed recipe for the design and possibly execution of an artwork, which may include computer code, functions, expressions, or other input which ultimately determines the form the art will take. This input may be mathematical, computational, or generative in nature. Inasmuch as algorithms tend to be deterministic, meaning that their repeated execution would always result in the production of identical artworks, some external factor is usually introduced. This can either be a random number generator of some sort, or an external body of data (which can range from recorded heartbeats to frames of a movie). Some artists also work with organically based gestural input which is then modified by an algorithm. By this definition, fractals made by a fractal program are not art, as humans are not involved. However, defined differently, algorithmic art can be seen to include fractal art, as well as other varieties such as those using genetic algorithms. The artist Kerry Mitchell stated in his 1999 Fractal Art Manifesto:

Fractal Art is not ... Computer(ized) Art, in the sense that the computer does all the work. The work is executed on a computer, but only at the direction of the artist. Turn a computer on and leave it alone for an hour. When you come back, no art will have been generated.

Algorists

"Algorist" is a term used for digital artists who create algorithmic art.

Algorists formally began corresponding and establishing their identity as artists following a panel titled "Art and Algorithms" at SIGGRAPH in 1995. The co-founders were Jean-Pierre Hébert and Roman Verostko. Hébert is credited with coining the term and its definition, which is in the form of his own algorithm:

if (creation && object of art && algorithm && one's own algorithm) {
     return * an algorist *
} else {
     return * not an algorist *
}

Types

Morphogenetic Creations, a computer-generated digital art exhibition using programmed algorithms by Andy Lomas, at the Watermans Arts Centre, west London, 2016

Cellular automata can be used to generate artistic patterns with an appearance of randomness, or to modify images such as photographs by applying a transformation such as the stepping stone rule (to give an impressionist style) repeatedly until the desired artistic effect is achieved. Their use has also been explored in music.
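A minimal sketch of the idea, using the elementary cellular automaton Rule 90, whose XOR update yields a Sierpinski-like pattern from a single live cell:

```python
# Elementary cellular automaton (Rule 90): each new cell is the XOR of
# its two neighbours (wrapping at the edges). From a single live cell
# this produces the structured-yet-complex texture the text mentions.
def rule90(row):
    n = len(row)
    return [row[(i - 1) % n] ^ row[(i + 1) % n] for i in range(n)]

row = [0] * 31
row[15] = 1                        # single live cell in the middle
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = rule90(row)
```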

Fractal art consists of varieties of computer-generated fractals with colouring chosen to give an attractive effect. Especially in the western world, it is not drawn or painted by hand but is usually created indirectly with the assistance of fractal-generating software, iterating through three phases: setting the parameters of appropriate fractal software; executing the possibly lengthy calculation; and evaluating the product. In some cases, other graphics programs are used to further modify the images produced; this is called post-processing. Non-fractal imagery may also be integrated into the artwork.
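The "possibly lengthy calculation" at the heart of fractal art can be sketched with the escape-time algorithm for the Mandelbrot set, rendered here as ASCII to keep the example self-contained:

```python
# Escape-time algorithm: iterate z -> z^2 + c and record how quickly z
# escapes the radius-2 disc; the escape count picks the "colour" (here a
# character), which is exactly the colouring step described in the text.
def mandelbrot(c, max_iter=40):
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i          # escaped: outside the set
    return max_iter           # assumed inside the set

for row in range(-12, 13):
    line = ""
    for col in range(-40, 21):
        n = mandelbrot(complex(col / 20, row / 12))
        line += " .:-=+*#%@"[min(n // 4, 9)]
    print(line)
```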

Genetic or evolutionary art makes use of genetic algorithms to develop images iteratively, selecting at each "generation" according to a rule defined by the artist.

Algorithmic art is not only produced by computers. Wendy Chun explains:

Software is unique in its status as metaphor for metaphor itself. As a universal imitator/machine, it encapsulates a logic of general substitutability; a logic of ordering and creative, animating disordering. Joseph Weizenbaum has argued that computers have become metaphors for "effective procedures," that is, for anything that can be solved in a prescribed number of steps, such as gene expression and clerical work.

The American artist, Jack Ox, has used algorithms to produce paintings that are visualizations of music without using a computer. Two examples are visual performances of extant scores, such as Anton Bruckner's Eighth Symphony and Kurt Schwitters' Ursonate. Later, she and her collaborator, Dave Britton, created the 21st Century Virtual Color Organ that does use computer coding and algorithms.

Since 1996 there have been ambigram generators that auto-generate ambigrams.

Digital art

Irrational Geometrics digital art installation 2008 by Pascal Dombis
The Cave Automatic Virtual Environment at the University of Illinois, Chicago
A 2007 hybrid artwork combining an algorithmically generated image with an acrylic painting through a neural network: the cover art by Ryota Matsumoto for Postdigital Aesthetics: Art, Computation, and Design, London: Palgrave.
Linguistics River, a 2012 MoMA educational net art project

Digital art refers to any artistic work or practice that uses digital technology as part of the creative or presentation process. It can also refer to computational art that uses and engages with digital media.

Since the 1960s, various names have been used to describe digital art, including computer art, electronic art, multimedia art and new media art.

History

Lillian Schwartz's Comparison of Leonardo's self-portrait and the Mona Lisa is based on Schwartz's Mona Leo. An example of a collage of digitally manipulated photographs

John Whitney developed the first computer-generated art in the early 1960s by utilizing mathematical operations to create art. In 1963, Ivan Sutherland invented the first interactive computer-graphics interface, known as Sketchpad. Between 1974 and 1977, Salvador Dalí created two large canvases of Gala Contemplating the Mediterranean Sea which at a distance of 20 meters is transformed into the portrait of Abraham Lincoln (Homage to Rothko), as well as prints of Lincoln in Dalivision, both based on a portrait of Abraham Lincoln processed on a computer by Leon Harmon and published in "The Recognition of Faces". The technique is similar to what later became known as photographic mosaics.

Andy Warhol created digital art using a Commodore Amiga when the computer was publicly introduced at the Lincoln Center, New York, in July 1985. An image of Debbie Harry was captured in monochrome from a video camera and digitized into a graphics program called ProPaint. Warhol manipulated the image by adding color using flood fills.

Art that uses digital tools

Digital paintings are completed in much the same way as traditional ones.

Digital art can be purely computer-generated (such as fractals and algorithmic art) or taken from other sources, such as a scanned photograph or an image drawn using vector graphics software using a mouse or graphics tablet. Artworks are considered digital paintings when created similarly to non-digital paintings but using software on a computer platform and digitally outputting the resulting image as painted on canvas.

Amidst varied opinions on the pros and cons of digital technology on the arts, there seems to be a strong consensus within the digital art community that it has created a "vast expansion of the creative sphere", i.e., that it has greatly broadened the creative opportunities available to professional and non-professional artists alike.

Computer-generated visual media

"Road", an image created by designer Madsen using the AI image generator Midjourney
A procedurally generated photorealistic landscape was created with Terragen. Terragen has been used in creating CGI for movies.

Digital visual art consists of either 2D visual information displayed on an electronic visual display or information mathematically translated into 3D information viewed through perspective projection on an electronic visual display. The simplest is 2D computer graphics, which reflect how you might draw using a pencil and a piece of paper. In this case, however, the image is on the computer screen, and the instrument you draw with might be a tablet stylus or a mouse. What is generated on your screen might appear to be drawn with a pencil, pen, or paintbrush. The second kind is 3D computer graphics, where the screen becomes a window into a virtual environment, where you arrange objects to be "photographed" by the computer. Typically 2D computer graphics use raster graphics as their primary means of source data representation, whereas 3D computer graphics use vector graphics in the creation of immersive virtual reality installations. A possible third paradigm is to generate art in 2D or 3D entirely through the execution of algorithms coded into computer programs. This can be considered the native art form of the computer; an introduction to its history is available in an interview with computer art pioneer Frieder Nake. Fractal art, datamoshing, algorithmic art, and real-time generative art are examples.

Computer-generated 3D still imagery

3D graphics are created via the process of designing imagery from geometric shapes, polygons, or NURBS curves to create three-dimensional objects and scenes for use in various media such as film, television, print, rapid prototyping, games/simulations, and special visual effects.

There are many software programs for doing this. The technology can enable collaboration, lending itself to sharing and augmenting by a creative effort similar to the open source movement and the creative commons in which users can collaborate on a project to create art.

Pop surrealist artist Ray Caesar works in Maya (a 3D modeling software used for digital animation), using it to create his figures as well as the virtual realms in which they exist.

Computer-generated animated imagery

Computer-generated animations are animations created with a computer from digital models created by 3D artists or procedurally generated. The term is usually applied to works created entirely with a computer. Movies make heavy use of computer-generated graphics; they are called computer-generated imagery (CGI) in the film industry. In the 1990s and early 2000s, CGI advanced enough that, for the first time, it was possible to create realistic 3D computer animation, although films had been using extensive computer images since the mid-70s. A number of modern films have been noted for their heavy use of photo-realistic CGI.

Digital painting

Digital painting mainly refers to the process of creating paintings in software on a computer, typically using a mouse or a graphics tablet. Through pixel simulation, digital brushes in painting software (see the software in Digital painting) can imitate traditional painting media and tools, such as oil, acrylic, pastel, charcoal, and airbrush. Users of the software can also customize brush properties such as pixel size to achieve unique visual effects (customized brushes).
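The pixel simulation behind a digital brush can be sketched generically: a stroke is a series of overlapping "dabs", each blended into the canvas with an opacity that falls off from the brush centre. The functions below are hypothetical illustrations, not the API of any real painting program.

```python
import math

def make_canvas(w, h, value=1.0):
    """White canvas stored as a grid of grayscale floats in [0, 1]."""
    return [[value] * w for _ in range(h)]

def stamp(canvas, cx, cy, radius, opacity=0.6, color=0.0):
    """Blend one soft, round brush dab onto the canvas.
    Opacity falls off linearly from the centre, mimicking an airbrush."""
    h, w = len(canvas), len(canvas[0])
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            d = math.hypot(x - cx, y - cy)
            if d <= radius:
                a = opacity * (1 - d / radius)  # soft edge
                canvas[y][x] = (1 - a) * canvas[y][x] + a * color

# A stroke is just a series of overlapping dabs along a path.
canvas = make_canvas(40, 20)
for i in range(30):
    stamp(canvas, 5 + i, 10, radius=3)
```

Changing the falloff curve, radius, or blending rule is what gives different brushes (pencil, charcoal, airbrush) their distinct look.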

Artificial intelligence art

Artists have used artificial intelligence to create artwork since at least the 1960s. Since their introduction in 2014, some artists have created artwork using generative adversarial networks (GANs), a machine learning framework in which two networks compete with each other and iterate, letting the computer converge on a good solution by itself. GANs can be used to generate pictures with visual effects similar to traditional fine art. The essential idea of image generators is that people can provide a text description and let the AI convert that text into visual picture content: anyone can turn their language into a painting through a picture generator. Some artists use image generators to produce paintings instead of drawing from scratch, then use the generated paintings as a basis to improve upon and finally create new digital paintings. This greatly lowers the threshold for painting and challenges the traditional definition of painting as an art.
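The alternating two-player update at the heart of a GAN can be caricatured with single numbers instead of images. The sketch below is a deliberate simplification under stated assumptions: the "generator" and "discriminator" are each one learnable number rather than neural networks, the discriminator learns by simple averaging rather than adversarial training, and finite differences stand in for backpropagation. It shows only the alternating update structure, not a faithful GAN.

```python
import random

random.seed(0)

def real_sample():
    # "Real data": numbers near 4.0 stand in for real images.
    return random.gauss(4.0, 0.1)

# Discriminator: scores a sample by closeness to its current estimate
# of where the real data lives (a single learnable number here).
d_centre = 0.0

def d_score(x):
    return -abs(x - d_centre)

# Generator: emits samples at a single learnable point, starting far off.
g_centre = -2.0

lr, eps = 0.05, 0.01
for step in range(2000):
    # 1) Discriminator step: nudge its estimate toward real data.
    d_centre += 0.05 * (real_sample() - d_centre)
    # 2) Generator step: move so the discriminator scores its output
    #    as more "real" (finite differences stand in for gradients).
    grad = (d_score(g_centre + eps) - d_score(g_centre - eps)) / (2 * eps)
    g_centre += lr * grad
```

After training, the generator's output sits close to the real-data region, which is the qualitative behaviour the adversarial setup is designed to produce.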

Generation process

Generally, the user sets the input, which includes a detailed description of the picture the user wants. For example, the content can be a scene, characters, weather, character relationships, specific items, and so on; it can also include a specific artist's style, a screen style, image pixel size, brightness, etc. The picture generator then returns several similar pictures generated according to the input (currently, four pictures is typical). After receiving the results, the user can select one picture as the final result, or let the generator redraw and return new pictures.

It is worth noting that this whole process is also similar to the interplay of the "generator" and "discriminator" modules in GANs.
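The request/select/redraw workflow described above can be sketched as a small client loop. The service call here is a stub returning labelled placeholders rather than images, and all function names are hypothetical, not any real generator's API:

```python
import random

def generate_images(prompt, n=4, seed=None):
    """Stand-in for a text-to-image service: returns n candidate
    'images' (here just labelled identifiers) for one prompt."""
    rng = random.Random(seed)
    return [f"{prompt}-candidate-{rng.randrange(10**6)}" for _ in range(n)]

def pick_or_redraw(prompt, accept, max_rounds=5):
    """The typical user loop: request candidates, then either accept
    one or ask for a redraw, up to a bounded number of rounds."""
    for round_no in range(max_rounds):
        candidates = generate_images(prompt, seed=round_no)
        chosen = accept(candidates)
        if chosen is not None:
            return chosen
    return None

# Example: reject the first batch (redraw once), accept the second.
calls = {"n": 0}
def accept_second_batch(candidates):
    calls["n"] += 1
    return candidates[0] if calls["n"] >= 2 else None

result = pick_or_redraw("rainy street, oil-painting style", accept_second_batch)
```

In a real system the `accept` step is the human in the loop judging the four candidates, which is what makes the workflow loosely resemble a generator/discriminator pairing.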

Awards and recognition

In both 1991 and 1992, Karl Sims won the Golden Nica award at Prix Ars Electronica for his 3D AI animated videos using artificial evolution.

In 2009, Eric Millikin won the Pulitzer Prize along with several other awards for his artificial intelligence art that was critical of government corruption in Detroit and resulted in the city's mayor being sent to jail.

In 2018, Christie's auction house in New York sold an artificial intelligence work, "Edmond de Belamy", for US$432,500. It was created by a Paris-based collective named "Obvious".

In 2019, Stephanie Dinkins won the Creative Capital award for her creation of an evolving artificial intelligence based on the "interests and culture(s) of people of color."

Also in 2019, Sougwen Chung won the Lumen Prize for her performances with a robotic arm that uses AI to attempt to draw in a manner similar to Chung.

In 2022, an amateur artist using Midjourney won the first-place $300 prize in a digital art competition at the Colorado State Fair.

Also in 2022, Refik Anadol created an artificial intelligence art installation at the Museum of Modern Art in New York, based on the museum's own collection.

Art made for digital media

In contemporary art, the term digital art is used primarily to describe visual art that is made with digital tools, is highly computational, and explicitly engages with digital technologies. Art historian Christiane Paul writes that it "is highly problematic to classify all art that makes use of digital technologies somewhere in its production and dissemination process as digital art since it makes it almost impossible to arrive at any unifying statement about the art form."

Digital installation art

Boundary Functions (1998) interactive floor projection by Scott Snibbe at the NTT InterCommunication Center in Tokyo

Digital installation art constitutes a broad field of activity and incorporates many forms. Some resemble video installations, particularly large-scale works involving projections and live video capture. By using projection techniques that enhance an audience's impression of sensory envelopment, many digital installations attempt to create immersive environments. Others go even further and attempt to facilitate a complete immersion in virtual realms. This type of installation is generally site-specific, scalable, and without fixed dimensionality, meaning it can be reconfigured to accommodate different presentation spaces.

Noah Wardrip-Fruin's "Screen" (2003) is an example of interactive digital installation art which makes use of a Cave Automatic Virtual Environment to create an interactive experience. Scott Snibbe's "Boundary Functions" is an example of augmented-reality digital installation art which responds to people who enter the installation, drawing lines between them to indicate their personal space.

Internet art and net.art

Internet art is digital art that uses the specific characteristics of the internet and is exhibited on the internet.

Digital art and blockchain

Blockchain, and more specifically NFTs, have been associated with digital art since the NFT craze of 2020 and 2021; digital art is a common use case for NFTs. By minting a piece of digital art, the owner of the NFT is proven to be the owner of the art piece. While the technology has received much criticism and has many flaws related to plagiarism and fraud (due to its almost completely unregulated nature), auction houses like Sotheby's and Christie's, and various museums and galleries around the world, have started collaborations and partnerships with digital artists, selling NFTs associated with digital artworks (via NFT platforms) and showcasing those artworks, together with their NFTs, both in virtual galleries and on real-life screens, monitors, and TVs.

Art theorists and historians

Notable art theorists and historians in this field include Oliver Grau, Jon Ippolito, Christiane Paul, Frank Popper, Jasia Reichardt, Mario Costa, Christine Buci-Glucksmann, Dominique Moulon, Robert C. Morgan, Roy Ascott, Catherine Perret, Margot Lovejoy, Edmond Couchot, Fred Forest and Edward A. Shanken.

Scholarship and archives

In addition to the creation of original art, research methods that utilize AI have been generated to quantitatively analyze digital art collections. This has been made possible due to the large-scale digitization of artwork in the past few decades. Although the main goal of digitization was to allow for accessibility and exploration of these collections, the use of AI in analyzing them has brought about new research perspectives.

Two computational methods, close reading and distant viewing, are the typical approaches used to analyze digitized art. Close reading focuses on specific visual aspects of one piece; tasks performed by machines in close reading include computational artist authentication and analysis of brushstrokes or texture properties. In contrast, distant viewing methods statistically visualize the similarity of a specific feature across an entire collection; common tasks include automatic classification, object detection, multimodal tasks, knowledge discovery in art history, and computational aesthetics. Whereas distant viewing spans large collections, close reading involves a single piece of artwork.
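The contrast between the two methods can be sketched with a toy feature. Here a "collection" is a few tiny grayscale grids (stand-ins for digitized paintings), the per-work feature is mean brightness, and distant viewing is simply computing that feature across the whole collection so its distribution can be compared. The titles and data are invented for illustration.

```python
# Toy "collection": each artwork is a grid of grayscale pixel values.
collection = {
    "nocturne":  [[0.1, 0.2], [0.1, 0.3]],
    "seascape":  [[0.6, 0.7], [0.5, 0.6]],
    "snowfield": [[0.9, 0.95], [0.85, 0.9]],
}

def mean_brightness(image):
    """Close reading of one feature of one work: average pixel value."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

# Distant viewing: compute the same feature across the whole collection
# so the distribution can be visualized or statistically compared.
brightness = {title: mean_brightness(img) for title, img in collection.items()}
darkest = min(brightness, key=brightness.get)
```

Real distant-viewing pipelines replace the hand-written feature with learned representations (e.g., from image classifiers), but the structure — one feature extractor applied uniformly, then collection-level statistics — is the same.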

Whilst 2D and 3D digital art is beneficial as it allows the preservation of history that would otherwise have been destroyed by events like natural disasters and war, there is the issue of who should own these 3D scans – i.e., who should own the digital copyrights.

Subtypes

Related organizations and conferences

Cultural tourism

From Wikipedia, the free encyclopedia
Cultural tourism in Egypt in the 19th century.
Tourists at Hearst Castle, California.
Tourists taking pictures at the Khmer Pre Rup temple ruins, an example of cultural tourism.

Cultural tourism is a type of tourism activity in which the visitor's essential motivation is to learn, discover, experience and consume the tangible and intangible cultural attractions/products in a tourism destination. These attractions/products relate to a set of distinctive material, intellectual, spiritual, and emotional features of a society that encompasses arts and architecture, historical and cultural heritage, culinary heritage, literature, music, creative industries and the living cultures with their lifestyles, value systems, beliefs and traditions.

Overview

Cultural tourism experiences include architectural and archaeological treasures, culinary activities, festivals or events, historic or heritage sites, monuments and landmarks, museums and exhibitions, national parks and wildlife sanctuaries, religious venues, temples and churches. It includes tourism in urban areas, particularly historic or large cities and their cultural facilities such as theatres. In the twenty-first-century United States, national parks and a limited number of Native American councils continue to promote "tribal tourism." The U.S. National Park Service has publicly endorsed this strain of cultural tourism, despite lingering concerns over exploitation and the potential hazards of ecotourism in Native America.

Proponents of cultural tourism say that it gives the local population the opportunity to benefit financially from their cultural heritage, and thus to appreciate and preserve it, while giving visitors the opportunity to broaden their personal horizons. Cultural tourism also has negative sides: it can destabilize the local economy, partly through rapid changes in population size; increase the cost of living for local residents; and increase pollution or create environmental problems. The local population also comes into contact with new ways of life that can disrupt its social fabric.

This form of tourism is also becoming generally more popular throughout the world, and a recent OECD report has highlighted the role that cultural tourism can play in regional development in different world regions. Cultural tourism has also been defined as 'the movement of persons to cultural attractions away from their normal place of residence, with the intention to gather new information and experiences to satisfy their cultural needs'. The nature of demand has recently shifted, with a growing desire for cultural "experiences" in particular. Additionally, cultural and heritage tourism experiences appear to be a potentially key component of memorable tourism experiences.

Destinations

A decorated water well in Zalipie, Poland
Tourists at the cultural historical Old Town of Porvoo

One type of cultural tourism destination is living cultural areas: visiting any culture other than one's own, such as by traveling to a foreign country. Other destinations include historical sites, modern urban districts, "ethnic pockets" of a town or village, fairs/festivals, theme parks, and natural ecosystems. Buczkowska distinguishes sectors of cultural tourism according to 1) the route or destination of the trip (study trips or cultural trips: thematic field trips, urban tourism, rural tourism), or 2) the theme of the trip undertaken (tourism of cultural heritage, tourism of contemporary culture, tourism of cultural heritage and contemporary culture). It has been shown that cultural attractions and events are particularly strong magnets for tourism. In light of this, many cultural districts add visitor services to key cultural areas to bolster tourist activity. The term cultural tourism is used for journeys that include visits to cultural resources, whether tangible or intangible, and regardless of the primary motivation. To properly understand the concept of cultural tourism, it is necessary to know the definitions of a number of terms such as culture, tourism, cultural economy, cultural and tourism potentials, cultural and tourist offer, and others.

Creative tourism

Creative tourism is a new type of tourism, theorized and defined by Greg Richards and Crispin Raymond in 2000. They defined creative tourism as "tourism which offers visitors the opportunity to develop their creative potential through active participation in courses and learning experiences, which are characteristic of the holiday destination where they are taken" (Richards, Greg and Raymond, Crispin, 2000). Creative tourism involves active participation by tourists in cultural experiences specific to each holiday destination.

This type of tourism is opposed to mass tourism and allows the destinations to diversify and offer innovative activities different from other destinations.

Similarly, in 2004 UNESCO launched a program entitled the Creative Cities Network. This network aims to highlight cities around the world that are putting creativity at the heart of their sustainable urban development plans. Creative cities are organized into seven categories representing seven different creative fields: crafts and folk arts, digital arts, film, design, gastronomy, literature, and music. As of January 2020, the network has 246 members across all categories. In order to promote the development of this new type of tourism, a non-profit organization, the Creative Tourism Network, was created in Barcelona in 2010. Its missions include, among others, the promotion of creative tourism and the creation of a network of "Creativefriendly" cities, as well as an awards celebration, the Creative Tourism Awards.

Personalized medicine

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Personalized_medicine Personalized medicine, also referred to...