
Sunday, September 13, 2020

Generative art

From Wikipedia, the free encyclopedia
Condensation Cube, plexiglas and water; Hirshhorn Museum and Sculpture Garden, begun 1965, completed 2008 by Hans Haacke
Iridem for trombone and clarinet, 1983 by Sergio Maltagliati
Interactive installation CIMs series, 2000 by Maurizio Bolognini
Installation view of Irrational Geometrics, 2008 by Pascal Dombis
Telepresence-based installation 10.000 Moving Cities, 2016 by Marc Lee
Generative art refers to art that in whole or in part has been created with the use of an autonomous system. An autonomous system in this context is generally one that is non-human and can independently determine features of an artwork that would otherwise require decisions made directly by the artist. In some cases the human creator may claim that the generative system represents their own artistic idea, and in others that the system takes on the role of the creator.

"Generative art" often refers to algorithmic art (algorithmically determined computer generated artwork), but artists can also make it using systems of chemistry, biology, mechanics and robotics, smart materials, manual randomization, mathematics, data mapping, symmetry, tiling, and more.

History

The use of the word "generative" in discussions of art has developed over time. "Artificial DNA" denotes a generative approach focused on constructing a system able to generate unpredictable events that nonetheless share a recognizable common character. The use of autonomous systems, required by some contemporary definitions, marks a generative approach in which the artist's direct control is strongly reduced; this approach is also called "emergent". Margaret Boden and Ernest Edmonds have noted the use of the term "generative art" in the broad context of automated computer graphics in the 1960s, beginning with artwork exhibited by Georg Nees and Frieder Nake in 1965:
The terms "generative art" and "computer art" have been used in tandem, and more or less interchangeably, since the very earliest days.
The first such exhibition showed the work of Nees in February 1965, which some claim was titled "Generative Computergrafik". While Nees did not himself remember, this was the title of his doctoral thesis published a few years later. The correct title of the first exhibition and catalog was "computer-grafik". "Generative art" and related terms were in common use among several other early computer artists around this time, including Manfred Mohr. The term "Generative Art", in the sense of dynamic artwork-systems able to generate multiple artwork-events, was first clearly used for the "Generative Art" conference in Milan in 1998.

The term has also been used to describe geometric abstract art where simple elements are repeated, transformed, or varied to generate more complex forms. Thus defined, generative art was practised by the Argentinian artists Eduardo McEntyre and Miguel Ángel Vidal in the late 1960s. In 1972 the Romanian-born Paul Neagu created the Generative Art Group in Britain. It was populated exclusively by Neagu using aliases such as "Hunsy Belmood" and "Edward Larsocchi." In 1972 Neagu gave a lecture titled 'Generative Art Forms' at the Queen's University, Belfast Festival.

In 1970 the School of the Art Institute of Chicago created a department called "Generative Systems." As described by Sonia Landy Sheridan the focus was on art practices using the then new technologies for the capture, inter-machine transfer, printing and transmission of images, as well as the exploration of the aspect of time in the transformation of image information.

In 1988 Clauser identified the aspect of systemic autonomy as a critical element in generative art:
It should be evident from the above description of the evolution of generative art that process (or structuring) and change (or transformation) are among its most definitive features, and that these features and the very term 'generative' imply dynamic development and motion. ... (the result) is not a creation by the artist but rather the product of the generative process - a self-precipitating structure.
In 1989 Celestino Soddu defined the Generative Design approach to Architecture and Town Design in his book Citta' Aleatorie.

In 1989 Franke referred to "generative mathematics" as "the study of mathematical operations suitable for generating artistic images."

From the mid-1990s Brian Eno popularized the terms generative music and generative systems, making a connection with earlier experimental music by Terry Riley, Steve Reich and Philip Glass.

From the end of the 20th century, communities of generative artists, designers, musicians and theoreticians began to meet, forming cross-disciplinary perspectives. The first meeting devoted to generative art took place in 1998, at the inaugural International Generative Art conference at Politecnico di Milano, Italy. In Australia, the Iterate conference on generative systems in the electronic arts followed in 1999. Online discussion has centred on the eu-gene mailing list, which began in late 1999 and has hosted much of the debate that has defined the field. These activities were later joined by the Generator.x conference in Berlin, starting in 2005. In 2012 the journal GASATHJ (Generative Art Science and Technology Hard Journal) was founded by Celestino Soddu and Enrica Colabella, joining several generative artists and scientists on its editorial board.

Some have argued that as a result of this engagement across disciplinary boundaries, the community has converged on a shared meaning of the term. As Boden and Edmonds put it in 2011:
Today, the term "Generative Art" is still current within the relevant artistic community. Since 1998 a series of conferences have been held in Milan with that title (Generativeart.com), and Brian Eno has been influential in promoting and using generative art methods (Eno, 1996). Both in music and in visual art, the use of the term has now converged on work that has been produced by the activation of a set of rules and where the artist lets a computer system take over at least some of the decision-making (although, of course, the artist determines the rules).
In the call for the Generative Art conferences in Milan (held annually since 1998), Celestino Soddu offered this definition of Generative Art:
Generative Art is the idea realized as genetic code of artificial events, as construction of dynamic complex systems able to generate endless variations. Each Generative Project is a concept-software that works producing unique and non-repeatable events, like music or 3D Objects, as possible and manifold expressions of the generating idea strongly recognizable as a vision belonging to an artist / designer / musician / architect /mathematician.
Discussion on the eu-gene mailing list was framed by the following definition by Adrian Ward from 1999:
Generative art is a term given to work which stems from concentrating on the processes involved in producing an artwork, usually (although not strictly) automated by the use of a machine or computer, or by using mathematic or pragmatic instructions to define the rules by which such artworks are executed.
A similar definition is provided by Philip Galanter:
Generative art refers to any art practice where the artist creates a process, such as a set of natural language rules, a computer program, a machine, or other procedural invention, which is then set into motion with some degree of autonomy contributing to or resulting in a completed work of art.

Types

Music

Johann Philipp Kirnberger's "Musikalisches Würfelspiel" (Musical Dice Game) of 1757 is considered an early example of a generative system based on randomness. Dice were used to select musical sequences from a numbered pool of previously composed phrases, balancing an element of order (the fixed phrases) against one of disorder (the dice rolls).
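The dice-game mechanism can be sketched in a few lines of Python; the phrase pool below is hypothetical, standing in for Kirnberger's pre-composed bars:

```python
import random

# Hypothetical pool of pre-composed one-bar phrases (the 'order'),
# keyed by the possible totals of two dice.
PHRASES = {
    2: "C4 E4 G4", 3: "D4 F4 A4", 4: "E4 G4 B4", 5: "F4 A4 C5",
    6: "G4 B4 D5", 7: "A4 C5 E5", 8: "B4 D5 F5", 9: "C5 E5 G5",
    10: "D5 F5 A5", 11: "E5 G5 B5", 12: "F5 A5 C6",
}

def dice_minuet(bars=8, seed=1757):
    """Roll two dice per bar (the 'disorder') to pick phrases from the pool."""
    rng = random.Random(seed)
    piece = []
    for _ in range(bars):
        roll = rng.randint(1, 6) + rng.randint(1, 6)
        piece.append(PHRASES[roll])
    return " | ".join(piece)
```

Every seed yields a different but always well-formed piece, because every possible roll maps to a valid phrase.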

The fugues of J.S. Bach could be considered generative, in that there is a strict underlying process that is followed by the composer. Similarly, serialism follows strict procedures which, in some cases, can be set up to generate entire compositions with limited human intervention.

Composers such as John Cage, Farmers Manual and Brian Eno have used generative systems in their works.

Visual art

The artist Ellsworth Kelly created paintings by using chance operations to assign colors in a grid. He also created works on paper that he then cut into strips or squares and reassembled using chance operations to determine placement.
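Kelly's chance operations can be imitated in code; the grid size and palette below are illustrative assumptions, not Kelly's actual materials:

```python
import random

def chance_grid(rows=8, cols=8,
                palette=("red", "yellow", "blue", "black", "white"), seed=0):
    """Assign each cell of a rows x cols grid a color by chance operation."""
    rng = random.Random(seed)
    return [[rng.choice(palette) for _ in range(cols)] for _ in range(rows)]
```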

Album de 10 sérigraphies sur 10 ans, by François Morellet, 2009
Iapetus, by Jean-Max Albert, 1985
Calmoduline Monument, by Jean-Max Albert, 1991

Artists such as Hans Haacke have explored processes of physical and social systems in an artistic context. François Morellet has used both highly ordered and highly disordered systems in his artwork. Some of his paintings feature regular systems of radial or parallel lines that create moiré patterns; in other works he has used chance operations to determine the coloration of grids. Sol LeWitt created generative art in the form of systems expressed in natural language and systems of geometric permutation. Harold Cohen's AARON system is a longstanding project combining software artificial intelligence with robotic painting devices to create physical artifacts.

Steina and Woody Vasulka are video art pioneers who used analog video feedback to create generative art. Video feedback is now cited as an example of deterministic chaos, and the early explorations by the Vasulkas anticipated contemporary science by many years. Software systems exploiting evolutionary computing to create visual form include those created by Scott Draves and Karl Sims. The digital artist Joseph Nechvatal has exploited models of viral contagion. Autopoiesis by Ken Rinaldo includes fifteen musical and robotic sculptures that interact with the public and modify their behaviors based on both the presence of the participants and each other. Jean-Pierre Hebert and Roman Verostko are founding members of the Algorists, a group of artists who create their own algorithms to make art. A. Michael Noll, of Bell Telephone Laboratories, programmed computer art using mathematical equations and programmed randomness, starting in 1962.

The French artist Jean-Max Albert, besides environmental sculptures such as Iapetus and O=C=O, developed a project dedicated to vegetation itself, in terms of biological activity. The Calmoduline Monument project is based on the property of a protein, calmodulin, to bond selectively to calcium. Exterior physical constraints (wind, rain, etc.) modify the electric potential of a plant's cellular membranes and consequently the flux of calcium; the calcium in turn controls the expression of the calmodulin gene. The plant can thus, in response to a stimulus, modify its "typical" growth pattern. The basic principle of this monumental sculpture is that, to the extent that these signals could be picked up and transported, they could be enlarged, translated into colors and shapes, and made to show the plant's "decisions", suggesting a level of fundamental biological activity.

Maurizio Bolognini works with generative machines to address conceptual and social concerns. Mark Napier is a pioneer in data mapping, creating works based on the streams of zeros and ones in ethernet traffic, as part of the "Carnivore" project. Martin Wattenberg pushed this theme further, transforming "data sets" as diverse as musical scores (in "Shape of Song", 2001) and Wikipedia edits (History Flow, 2003, with Fernanda Viegas) into dramatic visual compositions. The Canadian artist San Base developed a "Dynamic Painting" algorithm in 2002. Using computer algorithms as "brush strokes," Base creates sophisticated imagery that evolves over time to produce a fluid, never-repeating artwork.

Since 1996 there have been ambigram generators that automatically generate ambigrams.

Software art

For some artists, graphic user interfaces and computer code have become an independent art form in themselves. Adrian Ward created Auto-Illustrator as a commentary on software and generative methods applied to art and design.

Architecture

In 1987 Celestino Soddu created an "artificial DNA" of Italian medieval towns, able to generate endless 3D models of cities identifiable as belonging to the same idea.

In 2010, Michael Hansmeyer generated architectural columns in a project called "Subdivided Columns – A New Order (2010)". The piece explored how the simple process of repeated subdivision can create elaborate architectural patterns. Rather than designing any columns directly, Hansmeyer designed a process that produced columns automatically. The process could be run again and again with different parameters to create endless permutations. Endless permutations could be considered a hallmark of generative design. 
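The core idea, one simple subdivision rule run repeatedly with different parameters, can be sketched in one dimension (a deliberate simplification; Hansmeyer's columns subdivide 3-D meshes):

```python
def subdivide(segment, depth, ratio):
    """Recursively split a 1-D segment at `ratio`, returning leaf lengths.

    The same simple rule, applied over and over, yields a different
    ornament for every value of `ratio` -- the endless permutations.
    """
    lo, hi = segment
    if depth == 0:
        return [hi - lo]
    mid = lo + (hi - lo) * ratio
    return (subdivide((lo, mid), depth - 1, ratio)
            + subdivide((mid, hi), depth - 1, ratio))
```

Running the process again with a slightly different `ratio` or `depth` produces a new but recognizably related result, which is the point of designing the process rather than the artifact.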

Literature

Writers such as Tristan Tzara, Brion Gysin, and William Burroughs used the cut-up technique to introduce randomization to literature as a generative system. Jackson Mac Low produced computer-assisted poetry and used algorithms to generate texts; Philip M. Parker has written software to automatically generate entire books. Jason Nelson used generative methods with speech-to-text software to create a series of digital poems from movies, television and other audio sources.
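A crude software version of the cut-up technique (an illustration, not Burroughs's actual scissors-and-paste practice) might look like this:

```python
import random

def cut_up(text, pieces=8, seed=4):
    """Cut a text into word strips and reassemble them in random order."""
    words = text.split()
    size = max(1, len(words) // pieces)
    strips = [words[i:i + size] for i in range(0, len(words), size)]
    random.Random(seed).shuffle(strips)
    return " ".join(word for strip in strips for word in strip)
```

All of the source words survive; only their arrangement is surrendered to chance.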

Live coding

Generative systems may be modified while they operate, for example by using interactive programming environments such as SuperCollider, Fluxus and TidalCycles, including patching environments such as Max/MSP, Pure Data and vvvv. This is a standard approach to programming by artists, but may also be used to create live music and/or video by manipulating generative systems on stage, a performance practice that has become known as live coding. As with many examples of software art, because live coding emphasises human authorship rather than autonomy, it may be considered in opposition to generative art.

Theories

Philip Galanter

In the most widely cited theory of generative art, Philip Galanter (2003) describes generative art systems in the context of complexity theory, citing in particular Murray Gell-Mann and Seth Lloyd's notion of effective complexity. In this view both highly ordered and highly disordered generative art can be viewed as simple: highly ordered generative art minimizes entropy and allows maximal data compression, while highly disordered generative art maximizes entropy and disallows significant data compression. Maximally complex generative art blends order and disorder in a manner similar to biological life, and indeed biologically inspired methods are most frequently used to create complex generative art. This view is at odds with the earlier information-theoretic views of Max Bense and Abraham Moles, in which complexity in art increases with disorder.
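The compressibility framing can be demonstrated with an off-the-shelf compressor standing in for a complexity measure (a rough proxy for illustration, not Galanter's formal definition):

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size: low for order, high for disorder."""
    return len(zlib.compress(data)) / len(data)

ordered = b"AB" * 500                                        # highly ordered
rng = random.Random(0)
disordered = bytes(rng.randrange(256) for _ in range(1000))  # high entropy
```

The ordered stream compresses to a small fraction of its size, while the near-random stream barely compresses at all; on this view both extremes are "simple", and the interesting territory lies between them.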

Galanter further notes that, given the use of visual symmetry, pattern, and repetition by the most ancient known cultures, generative art is as old as art itself. He also addresses the mistaken equivalence, made by some, between rule-based art and generative art. For example, some art is based on constraint rules that disallow the use of certain colors or shapes. Such art is not generative, because constraint rules are not constructive; by themselves they do not assert what is to be done, only what cannot be done.

Margaret Boden and Ernest Edmonds

In their 2009 article, Margaret Boden and Ernest Edmonds agree that generative art need not be restricted to that done using computers, and that some rule-based art is not generative. They develop a technical vocabulary that includes Ele-art (electronic art), C-art (computer art), D-art (digital art), CA-art (computer assisted art), G-art (generative art), CG-art (computer based generative art), Evo-art (evolutionary based art), R-art (robotic art), I-art (interactive art), CI-art (computer based interactive art), and VR-art (virtual reality art).

Questions

The discourse around generative art can be characterised by the theoretical questions which motivate its development. McCormack et al. propose the following questions, shown with paraphrased summaries, as the most important:
  1. Can a machine originate anything? Related to machine intelligence - can a machine generate something new, meaningful, surprising and of value: a poem, an artwork, a useful idea, a solution to a long-standing problem?
  2. What is it like to be a computer that makes art? If a computer could originate art, what would it be like from the computer's perspective?
  3. Can human aesthetics be formalised?
  4. What new kinds of art does the computer enable? Many generative artworks do not involve digital computers, but what does generative computer art bring that is new?
  5. In what sense is generative art representational, and what is it representing?
  6. What is the role of randomness in generative art? For example, what does the use of randomness say about the place of intentionality in the making of art?
  7. What can computational generative art tell us about creativity? How could generative art give rise to artefacts and ideas that are new, surprising and valuable?
  8. What characterises good generative art? How can we form a more critical understanding of generative art?
  9. What can we learn about art from generative art? For example, can the art world be considered a complex generative system involving many processes outside the direct control of artists, who are agents of production within a stratified global art market?
  10. What future developments would force us to rethink our answers?
Another question is of postmodernism—are generative art systems the ultimate expression of the postmodern condition, or do they point to a new synthesis based on a complexity-inspired world-view?

Electronic literature

From Wikipedia, the free encyclopedia
 
Electronic literature or digital literature is a genre of literature encompassing works created exclusively on and for digital devices, such as computers, tablets, and mobile phones. A work of electronic literature can be defined as "a construction whose literary aesthetics emerge from computation", "work that could only exist in the space for which it was developed/written/coded—the digital space". This means that these writings cannot be easily printed, or cannot be printed at all, because elements crucial to the text are unable to be carried over onto a printed version. The digital literature world continues to innovate on print's conventions while challenging the boundaries between digitized literature and electronic literature. Some novels are exclusive to tablets and smartphones for the simple fact that they require a touchscreen. Digital literature tends to require the user to traverse the literature through the digital setting, making the use of the medium part of the literary exchange. Espen J. Aarseth wrote in his book Cybertext: Perspectives on Ergodic Literature that "it is possible to explore, get lost, and discover secret paths in these texts, not metaphorically, but through the topological structures of the textual machinery".

Definitions

It is difficult to accurately define electronic literature. The phrase itself consists of two words, each with their own specific meanings. Arthur Krystal in What Is Literature explains that "lit(t)eratura referred to any writing formed with letters". However, Krystal goes on to explore what literature has transformed into: "a record of one human being's sojourn on earth, proffered in verse or prose that artfully weaves together knowledge of the past with a heightened awareness of the present in ever new verbal configurations". Electronic denotes anything "of, relating to, or being a medium...by which information is transmitted electronically". Thus electronic literature can be considered a branch from the main tree of literature. Katherine Hayles discusses the topic in the online article Electronic Literature: What Is It. She argues "electronic literature, generally considered to exclude print literature that has been digitized, is by contrast 'digital born', and (usually) meant to be read on a computer". A definition offered by the Electronic Literature Organization (ELO) states electronic literature "refers to works with an important literary aspect that takes advantage of the capabilities and contexts provided by the stand-alone or networked computer".

On its official website, the ELO offers this additional definition of electronic literature as consisting of works which are:

  • E-books, hypertext and poetry, on and off of the Web
  • Animated poetry presented in graphical forms, for example Flash and other platforms
  • Computer art installations, which ask viewers to read them or otherwise have literary aspects
  • Conversational characters, also known as chatterbots
  • Interactive fiction
  • Novels that take the form of emails, SMS messages, or blogs
  • Poems and stories that are generated by computers, either interactively or based on parameters given at the beginning
  • Collaborative writing projects that allow readers to contribute to the text of a work
  • Literary performances online that develop new ways of writing
While the ELO definition incorporates many aspects of digital literature, it lacks solid guidelines and fails to recognize literature created on social media platforms, including Twitterature. Given this vagueness, many debate what truly qualifies as a piece of e-literature, and a large number of works fall through the cracks of the imprecise characteristics that generally define electronic literature.

History

A gradual transition into the digital world began with new advancements in technology that made things more efficient and accessible. This is comparable to the introduction of the printing press in the 15th century, which people at first did not consider a major contributor to literature. In the 1960s and 1970s, the creation of the personal computer allowed people to begin expanding literature into the electronic realm.

Predecessors

In 1877, spoken-word recordings began with the invention of the phonograph. In the 1930s, the first "talking book" recordings were made to hold short stories and book chapters. The term "audiobook" entered the vernacular in the 1970s, as cassette tapes reached the public. 1971 is officially accepted as the year of the first e-book: although there were several earlier contenders for the invention of an "electronic book", Michael Hart, the founder of Project Gutenberg, has been accepted as the official inventor of the e-book after creating a digital copy of the Declaration of Independence.

Early history

In 1975–76, Will Crowther programmed a text game named Colossal Cave Adventure (also known as Adventure). Considered one of the earlier computer adventure games, it possessed a story that had the reader make choices on which way to go. These choices could lead the reader to the end, or to his or her untimely death. This non-linear format was later mimicked by the text adventure game, Zork, created by a group of MIT students in 1977–79. These two games are considered to be the first examples of interactive fiction as well as some of the earliest video games. The earliest pieces of electronic literature as presently defined were created using Storyspace, software developed by Jay David Bolter and Michael Joyce in the 1980s. They sold the software in 1990 to Eastgate Systems, a small software company that has maintained and updated the code in Storyspace up to the present. Storyspace and other similar programs use hypertext to create links within text. Literature using hypertext is frequently referred to as hypertext fiction. Originally, these stories were often disseminated on discs and later on CD. Hypertext fiction is still being created today using not only Storyspace, but other programs such as Twine.
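The lexia-and-link structure underlying Storyspace-style hypertext fiction can be sketched as a small graph of text nodes and named choices; the story text here is invented purely for illustration:

```python
# Each lexia (text node) carries its prose and a dict of named links.
STORY = {
    "cave": ("You stand at a cave mouth.", {"enter": "dark", "leave": "end"}),
    "dark": ("It is pitch dark inside.", {"flee": "end"}),
    "end":  ("Your journey ends here.", {}),
}

def traverse(start, choices):
    """Follow a sequence of link names through the story graph."""
    node, path = start, [start]
    for choice in choices:
        node = STORY[node][1][choice]
        path.append(node)
    return path
```

Different choice sequences yield different paths through the same text, which is the defining move of both hypertext fiction and interactive fiction.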

Modern

While hypertext fiction is still being made and interactive fiction is still created with text stories and images, there is debate over whether the term "literature" should be used to describe video games. Though Adventure and Zork are considered video games, advancements in technology have evolved video-game mediums from text to action and back to text. More often than not, such video games are told as interactive literature in which the player makes choices that alter the outcome of the story. The video game Mass Effect bases its story entirely on these choices, and Mass Effect 3 goes further: character interactions with the player character, and how the game ends, depend on the player's actions.

In other instances the games are a story and the player exists to move the plot along. Journey, a game by Thatgamecompany released in 2012 for the PlayStation 3, is more story than game. The titular "journey" is the trek the player takes from start to finish as a character with limited mobility and world interaction. While the player can play with one other player at a time on the network, they cannot communicate through traditional means. With no actual words, this game takes the player through a world from prologue to epilogue.

In Espen Aarseth's Cybertext: Perspectives on Ergodic Literature, he defines "ergodic literature" as literature where "nontrivial effort is required to allow the reader to traverse the text". An example from Aarseth states, "Since writing always has been a spatial activity, it is reasonable to assume that ergodic textuality has been practiced as long as linear writing. For instance, the wall inscriptions of the temples in ancient Egypt were often connected two-dimensionally (on one wall) or three-dimensionally (from wall to wall and from room to room), and this layout allowed a nonlinear arrangement of the religious text in accordance with the symbolic architectural layout of the temple." Using these examples hypertext fiction and interactive fiction can be considered ergodic literature, and under the umbrella of interactive fiction, so can video games. Electronic literature continues to evolve.

Preservation and archiving

Electronic literature, according to Hayles, becomes unplayable after a decade or less due to the "fluid nature of media". Therefore, electronic literature risks losing the opportunity to build the "traditions associated with print literature". On the other hand, classics such as Michael Joyce's afternoon, a story (1987) are still read and have been republished on CD, while simple HTML hypertext fictions from the 1990s are still accessible online and can be read in modern browsers.

Several organizations are dedicated to preserving works of electronic literature. The UK-based Digital Preservation Coalition aims to preserve digital resources in general, while the Electronic Literature Organization's PAD (Preservation / Archiving / Dissemination) initiative gave recommendations on how to think ahead when writing and publishing electronic literature, as well as how to migrate works running on defunct platforms to current technologies.

The Electronic Literature Collection is a series of anthologies of electronic literature published by the Electronic Literature Organization, both on CD/DVD and online, and this is another strategy in working to make sure that electronic literature is available for future generations.

The Maryland Institute for Technologies in the Humanities and the Electronic Literature Lab at Washington State University - Vancouver also work towards the documentation and preservation of electronic literature and hypermedia.

Notable people and works

Noteworthy authors, critics, and works associated with electronic literature include: 

Robert Coover, a professor of creative writing at Brown University, helped bring Talan Memmott to the university as its first graduate fellow of electronic writing.

Pry, a novella, is a collaboration between Danny Cannizzaro and Samantha Gorman (also known as Tender Claws). It is an electronic-literature application for phones and tablets. By utilizing the touch-based gestures common on tablets, Pry takes a very dynamic approach to the emerging e-lit genre: these gestures allow the reader to dig beneath the story at the surface of Pry.

Game, game, game and again game (2008), Nothing you have done deserves such praise (2013), I made this. you play this. we are enemies (2009), and Scrape Scraperteeth (2011) are important examples of the intersection of games and poetry. They were created by digital poet and net-artist Jason Nelson, whose career has been devoted to exploring interface, interactivity, and surrealism within electronic literature.

Hypertext

From Wikipedia, the free encyclopedia

Engineer Vannevar Bush wrote "As We May Think" in 1945, in which he described the Memex, a theoretical proto-hypertext device which in turn helped inspire the subsequent invention of hypertext.
Douglas Engelbart in 2009, at the 40th anniversary celebrations of "The Mother of All Demos" in San Francisco, a 90-minute 1968 presentation of the NLS computer system, a combination of hardware and software that demonstrated many hypertext ideas.

Hypertext is text displayed on a computer display or other electronic devices with references (hyperlinks) to other text that the reader can immediately access. Hypertext documents are interconnected by hyperlinks, which are typically activated by a mouse click, keypress set or by touching the screen. Apart from text, the term "hypertext" is also sometimes used to describe tables, images, and other presentational content formats with integrated hyperlinks. Hypertext is one of the key underlying concepts of the World Wide Web, where Web pages are often written in the Hypertext Markup Language (HTML). As implemented on the Web, hypertext enables the easy-to-use publication of information over the Internet.

Etymology

"(...)'Hypertext' is a recent coinage. 'Hyper-' is used in the mathematical sense of extension and generality (as in 'hyperspace,' 'hypercube') rather than the medical sense of 'excessive' ('hyperactivity'). There is no implication about size— a hypertext could contain only 500 words or so. 'Hyper-' refers to structure and not size."
The English prefix "hyper-" comes from the Greek prefix "ὑπερ-" and means "over" or "beyond"; it has a common origin with the prefix "super-" which comes from Latin. It signifies the overcoming of the previous linear constraints of written text.
The term "hypertext" is often used where the term "hypermedia" might seem appropriate. In 1992, author Ted Nelson – who coined both terms in 1963 – wrote:
By now the word "hypertext" has become generally accepted for branching and responding text, but the corresponding word "hypermedia", meaning complexes of branching and responding graphics, movies and sound – as well as text – is much less used. Instead they use the strange term "interactive multimedia": this is four syllables longer, and does not express the idea of extending hypertext.

Types and uses of hypertext

Hypertext documents can either be static (prepared and stored in advance) or dynamic (continually changing in response to user input, such as dynamic web pages). Static hypertext can be used to cross-reference collections of data in documents, software applications, or books on CDs. A well-constructed system can also incorporate other user-interface conventions, such as menus and command lines. Links used in a hypertext document usually replace the current piece of hypertext with the destination document. A lesser known feature is StretchText, which expands or contracts the content in place, thereby giving more control to the reader in determining the level of detail of the displayed document. Some implementations support transclusion, where text or other content is included by reference and automatically rendered in place.
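Transclusion can be sketched as a simple include-by-reference expansion; the `{{name}}` marker syntax below is an invented convention for illustration, not any particular system's:

```python
import re

# Content fragments addressable by name (the transclusion sources).
FRAGMENTS = {
    "memex": "a theoretical proto-hypertext device",
}

def transclude(text, fragments):
    """Replace each {{name}} marker with the referenced content, in place."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: fragments[m.group(1)], text)
```

Unlike an ordinary link, which replaces the current document with the destination, the referenced content is rendered inside the including document.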

Hypertext can be used to support very complex and dynamic systems of linking and cross-referencing. The most famous implementation of hypertext is the World Wide Web, written in the final months of 1990 and released on the Internet in 1991.

History

In 1941, Jorge Luis Borges published "The Garden of Forking Paths", a short story that is often considered an inspiration for the concept of hypertext.

In 1945, Vannevar Bush wrote an article in The Atlantic Monthly called "As We May Think", about a futuristic proto-hypertext device he called a Memex. A Memex would hypothetically store and record content on reels of microfilm, using electric photocells to read coded symbols recorded next to individual microfilm frames while the reels spun at high speed, stopping on command. The coded symbols would enable the Memex to index, search, and link content to create and follow associative trails. Although the Memex was never implemented and could only link content in a relatively crude fashion, by creating chains of entire microfilm frames, it is now regarded not only as a proto-hypertext device but also as fundamental to the history of hypertext, because it directly inspired the invention of hypertext by Ted Nelson and Douglas Engelbart.

Ted Nelson gives a presentation on Project Xanadu, a theoretical hypertext model conceived in the 1960s whose first, incomplete implementation was published in 1998.
 
In 1963, Ted Nelson coined the terms 'hypertext' and 'hypermedia' as part of a model he developed for creating and using linked content (first published reference 1965). He later worked with Andries van Dam to develop the Hypertext Editing System (text editing) in 1967 at Brown University. It was implemented on an IBM 2250 terminal, with a light pen provided as a pointing device. By 1976, its successor FRESS was used in a poetry class in which students could browse a hyperlinked set of poems and discussion by experts, faculty and other students, in what was arguably the world's first online scholarly community, which van Dam says "foreshadowed wikis, blogs and communal documents of all kinds". Ted Nelson said in the 1960s that he had begun implementing the hypertext system he theorized, named Project Xanadu, but its first, incomplete public release was finished much later, in 1998.

Douglas Engelbart independently began working on his NLS system in 1962 at Stanford Research Institute, although delays in obtaining funding, personnel, and equipment meant that its key features were not completed until 1968. In December of that year, Engelbart demonstrated a 'hypertext' (meaning editing) interface to the public for the first time, in what has come to be known as "The Mother of All Demos". 

ZOG, an early hypertext system, was developed at Carnegie Mellon University during the 1970s and used for documents on Nimitz-class aircraft carriers; it later evolved into KMS (Knowledge Management System).

The first hypermedia application is generally considered to be the Aspen Movie Map, implemented in 1978. The Movie Map allowed users to arbitrarily choose which way they wished to drive in a virtual cityscape, in two seasons (from actual photographs) as well as 3-D polygons. 

In 1980, Tim Berners-Lee created ENQUIRE, an early hypertext database system somewhat like a wiki but without hypertext punctuation, which was not invented until 1987. The early 1980s also saw a number of experimental "hyperediting" functions in word processors and hypermedia programs, many of whose features and terminology anticipated those of the World Wide Web. Guide, the first significant hypertext system for personal computers, was developed by Peter J. Brown at the University of Kent at Canterbury in 1982.

In 1980, Roberto Busa, an Italian Jesuit priest and one of the pioneers in the use of computers for linguistic and literary analysis, published the Index Thomisticus as a tool for performing text searches within the massive corpus of Aquinas's works. Sponsored by the founder of IBM, Thomas J. Watson, the project lasted about 30 years (1949-1980) and eventually produced the 56 printed volumes of the Index Thomisticus, the first important hypertext work about the books of Saint Thomas Aquinas and a few related authors.

In 1983, Ben Shneiderman at the University of Maryland Human-Computer Interaction Lab led a group that developed the HyperTies system, which was commercialized by Cognetics Corporation. HyperTies was used to create the July 1988 issue of the Communications of the ACM as a hypertext document, and then the first commercial electronic book, Hypertext Hands-On!

In August 1987, Apple Computer released HyperCard for the Macintosh line at the MacWorld convention. Its impact, combined with interest in Peter J. Brown's GUIDE (marketed by OWL and released earlier that year) and Brown University's Intermedia, led to broad interest in and enthusiasm for hypertext, hypermedia, databases, and new media in general. The first ACM Hypertext (hyperediting and databases) academic conference took place in November 1987, in Chapel Hill, NC, where many other applications, including the branched literature writing software Storyspace, were also demonstrated.

Meanwhile, Nelson (who had been working on and advocating his Xanadu system for over two decades) convinced Autodesk to invest in his revolutionary ideas. The project continued at Autodesk for four years, but no product was released.

In 1989, Tim Berners-Lee, then a scientist at CERN, proposed and later prototyped a new hypertext project in response to a request for a simple, immediate, information-sharing facility, to be used among physicists working at CERN and other academic institutions. He called the project "WorldWideWeb".
HyperText is a way to link and access information of various kinds as a web of nodes in which the user can browse at will. Potentially, HyperText provides a single user-interface to many large classes of stored information, such as reports, notes, data-bases, computer documentation and on-line systems help. We propose the implementation of a simple scheme to incorporate several different servers of machine-stored information already available at CERN, including an analysis of the requirements for information access needs by experiments... A program which provides access to the hypertext world we call a browser. ― T. Berners-Lee, R. Cailliau, 12 November 1990, CERN
In 1992, Lynx appeared as an early web browser. Its ability to provide hypertext links within documents that could reach documents anywhere on the Internet helped begin the creation of the Web on the Internet.

As new web browsers were released, traffic on the World Wide Web quickly exploded from only 500 known web servers in 1993 to over 10,000 in 1994. As a result, all previous hypertext systems were overshadowed by the success of the Web, even though it lacked many features of those earlier systems, such as integrated browsers/editors (a feature of the original WorldWideWeb browser, which was not carried over into most of the other early Web browsers).

Implementations

Besides the already mentioned Project Xanadu, Hypertext Editing System, NLS, HyperCard, and World Wide Web, there are other noteworthy early implementations of hypertext, with different feature sets: 

  • Hypertext Editing System (HES), IBM 2250 display console – Brown University, 1969

Academic conferences

Among the top academic conferences for new research in hypertext is the annual ACM Conference on Hypertext and Hypermedia. Although not exclusively about hypertext, the World Wide Web series of conferences, organized by IW3C2, include many papers of interest. There is a list on the Web with links to all conferences in the series.

Hypertext fiction

Hypertext writing has developed its own style of fiction, coinciding with the growth and proliferation of hypertext development software and the emergence of electronic networks. Two software programs specifically designed for literary hypertext, Storyspace and Intermedia, became available in the 1990s.

In Italian production, the hypertext s000t000d by Filippo Rosso (2002) was intended to lead the reader (with the help of a three-dimensional map) through a web-page interface, and was written in HTML and PHP.

An advantage of writing a narrative using hypertext technology is that the meaning of the story can be conveyed through a sense of spatiality and perspective that is arguably unique to digitally networked environments. An author's creative use of nodes, the self-contained units of meaning in a hypertextual narrative, can play with the reader's orientation and add meaning to the text. 

One of the most successful computer games, Myst, was first written in HyperCard. The game was constructed as a series of Ages, each Age consisting of a separate HyperCard stack. The full game consists of over 2,500 cards. In some ways Myst redefined interactive fiction, using puzzles and exploration as a replacement for hypertextual narrative.

Critics of hypertext claim that it inhibits the old, linear reader experience by creating several different tracks to read along, and that this in turn contributes to a postmodernist fragmentation of worlds. In some cases, hypertext may be detrimental to the development of appealing stories (as with hypertext gamebooks), where the ease of linking fragments can lead to non-cohesive or incomprehensible narratives. However, these critics do see value in its ability to present several different views on the same subject in a simple way. This echoes the arguments of 'medium theorists' such as Marshall McLuhan, who examined the social and psychological impacts of media. New media can become so dominant in public culture that they effectively create a "paradigm shift" as people shift their perceptions, their understanding of the world, and their ways of interacting with the world and each other in relation to new technologies and media. Hypertext thus signifies a change from linear, structured, and hierarchical forms of representing and understanding the world toward fractured, decentralized, and changeable media based on the technological concept of the hypertext link.

In the 1990s, women and feminist artists took advantage of hypertext and produced dozens of works. Linda Dement's Cyberflesh Girlmonster is a hypertext CD-ROM that incorporates images of women's body parts and remixes them to create new monstrous yet beautiful shapes. Dr. Caitlin Fisher's award-winning online hypertext novella "These Waves of Girls" is set in three time periods of the protagonist exploring polymorphous perversity enacted in her queer identity through memory. The story is written as a reflective diary of the interconnected memories of childhood, adolescence, and adulthood. It consists of an associated multi-modal collection of nodes that includes linked text, still and moving images, manipulable images, animations, and sound clips.

Forms of hypertext

There are various forms of hypertext, each of which are structured differently. Below are four of the existing forms of hypertext:
  • Axial hypertexts are the simplest in structure. They are situated along an axis in a linear style. These hypertexts have a straight path from beginning to end and are fairly easy for the reader to follow. An example of an axial hypertext is The Virtual Disappearance of Miriam.
  • Arborescent hypertexts are more complex than the axial form. They have a branching structure which resembles a tree. These hypertexts have one beginning but many possible endings. The ending that the reader finishes on depends on their decisions whilst reading the text. This is much like gamebook novels that allow readers to choose their own ending.
  • Networked hypertexts are more complex still than the two previous forms of hypertext. They consist of an interconnected system of nodes with no dominant axis of orientation. Unlike the arborescent form, networked hypertexts do not have any designated beginning or any designated endings. An example of a networked hypertext is Shelley Jackson's Patchwork Girl.
  • Layered hypertext consists of two layers of linked pages. Each layer is doubly linked sequentially, and a page in the top layer is doubly linked with a corresponding page in the bottom layer. The top layer contains plain text; the bottom multimedia layer provides photos, sounds, and video. In the Dutch historical novel De man met de hoed, designed as layered hypertext in 2006 by Eisjen Schaaf, Pauline van de Ven, and Paul Vitányi, the structure is proposed to enhance the atmosphere of the time, to enrich the text with research and family archive material, and to enable readers to insert memories of their own while preserving tension and storyline.
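The first three forms can be told apart mechanically if a hypertext is modeled as a directed graph of links. The classifier below is an illustrative sketch under simplified assumptions (the layered form, which depends on page content rather than link shape, is omitted):

```python
def classify(links):
    """Classify a hypertext link graph as axial, arborescent, or
    networked. links maps each node to the list of nodes it links to."""
    indeg = {n: 0 for n in links}          # in-degree of every node
    for dsts in links.values():
        for d in dsts:
            indeg[d] = indeg.get(d, 0) + 1
    roots = sum(1 for v in indeg.values() if v == 0)
    linear = all(len(d) <= 1 for d in links.values())
    tree_like = all(v <= 1 for v in indeg.values())
    if linear and tree_like and roots == 1:
        return "axial"        # one straight path from beginning to end
    if tree_like and roots == 1:
        return "arborescent"  # one beginning, branching to many endings
    return "networked"        # cross-links, no dominant axis

axial = {"start": ["middle"], "middle": ["end"], "end": []}
tree = {"start": ["a", "b"], "a": [], "b": []}
web = {"a": ["b"], "b": ["c"], "c": ["a"]}
print(classify(axial), classify(tree), classify(web))
# axial arborescent networked
```

A gamebook corresponds to the arborescent case (one beginning, many endings), while a work like Patchwork Girl falls into the networked case, where no single root or axis of orientation exists.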

IFOAM - Organics International

From Wikipedia, the free encyclopedia
 
International Federation of Organic Agriculture Movements (IFOAM) - Organics International
Formation: 1972
Type: NGO
Headquarters: Bonn, Germany
Region served: Global
Membership: 710 members
Official language: English
Main organ: General Assembly
Website: www.ifoam.bio

The International Federation of Organic Agriculture Movements (IFOAM - Organics International) is the worldwide umbrella organization for the organic agriculture movement, which represents close to 800 affiliates in 117 countries.

It declares its mission to be to "lead, unite and assist the organic movement in its full diversity", and its vision to be the "worldwide adoption of ecologically, socially and economically sound systems, based on the Principles of Organic Agriculture".

Among its wide range of activities, the federation maintains an organic farming standard, and an organic accreditation and certification service.

History

Rașit Pertev, Secretary of the International Fund for Agricultural Development, was a keynote speaker at the 2014 Organic World Congress in Istanbul, Turkey.
 
IFOAM - Organics International began in Versailles, France, on November 5, 1972, during an international congress on organic agriculture organized by the French farmer organization Nature et Progrès. The late Roland Chevriot, President of Nature et Progrès, took the initiative. There were 5 founding members representing different organizations: Lady Eve Balfour representing the Soil Association of Great Britain, Kjell Arman representing the Swedish Biodynamic Association, Pauline Raphaely representing the Soil Association of South Africa, Jerome Goldstein representing Rodale Press of the United States, and Roland Chevriot representing Nature et Progrès of France.

The aim of the new organization was reflected in the name: International Federation of Organic Agriculture Movements. The founders hoped that the federation would meet what they saw as a major need: a unified, organized voice for organic food, and the diffusion and exchange of information on the principles and practices of organic agriculture across national and linguistic boundaries. In 2015 the name was changed to IFOAM - Organics International.

Structure

Linda Bullard, former President of IFOAM - Organics International, Dr. Vandana Shiva, winner of the Right Livelihood Award, and Magda Aelvoet, Belgian Minister of State and former Health and Environment Minister, celebrate the landmark decision of the European Patent Office to uphold the revocation, in its entirety, of a patent on a fungicidal product derived from seeds of the neem, a tree indigenous to the Indian subcontinent.
 
The General Assembly of IFOAM - Organics International serves as the foundation of the organization. It elects the World Board of IFOAM - Organics International for a three-year term. The World Board is a diverse group of individuals working voluntarily to lead IFOAM - Organics International. The current World Board was elected at the General Assembly of IFOAM in Istanbul which took place in October 2014. The World Board appoints members to official committees, working groups and task forces based upon the recommendation of the membership of IFOAM - Organics International. Member organizations also establish regional groups and sector specific interest groups.

International standing

IFOAM - Organics International actively participates in international agricultural and environmental negotiations with the United Nations and multilateral institutions to further the interests of the organic agricultural movement worldwide, and has observer status with, or is otherwise accredited by, a number of international institutions.
According to the One World Trust's Global Accountability Report 2008 which assessed a range of organisations in areas such as transparency, stakeholder participation and evaluation capacity, "IFOAM is the highest scoring international NGO, and at the top of the 30 organisations this year with a score of 71 percent".

Members

Activities

IFOAM - Organics International and Standards and Certification

The Organic Guarantee System (OGS) of IFOAM - Organics International is designed to a) facilitate the development of organic standards and third-party certification worldwide and to b) provide an international guarantee of these standards and organic certification.

In recent years the OGS approach of IFOAM - Organics International underwent some significant changes. With the establishment and spread of organic standards and certification around the world, a number of new challenges appeared. Smallholder farmers in developing countries in particular struggle with a) the multitude of standards they are expected to conform with and b) high certification costs and considerable administrative expenditures.

IFOAM - Organics International had a breakthrough in the development and adoption of approaches to address these certification problems. The organization now directs special focus on the promotion of two new concepts:

IFOAM Family of Standards

In the framework of a multi-year collaboration, IFOAM - Organics International developed, together with its UN partners the Food and Agriculture Organization (FAO) and the United Nations Conference on Trade and Development (UNCTAD), a set of standard requirements that functions as an international reference for assessing the quality and equivalency of organic standards and regulations. It is known as the COROS (Common Objectives and Requirements of Organic Standards). The vision is that the Family of Standards will contain all organic standards and regulations equivalent to the COROS. Instead of assessing each standard against every other, the Family of Standards can be used as a tool to simplify equivalence assessment procedures while ensuring a high level of integrity and transparency. The Family of Standards program started in January 2011; one year later, about 50 standards worldwide had been approved.

Participatory Guarantee Systems (PGS)

Participatory Guarantee Systems are locally focused quality assurance systems. They "certify producers based on active participation of stakeholders and are built on a foundation of trust, social networks and knowledge exchange" (definition of IFOAM - Organics International, 2008).

Participatory Guarantee Systems represent an alternative to third party certification, especially adapted to local markets and short supply chains. They can also complement third party certification with a private label that brings additional guarantees and transparency. PGS enable the direct participation of producers, consumers and other stakeholders in:
  • the choice and definition of the standards
  • the development and implementation of certification procedures
  • the certification decisions
For many organic farmers, particularly in developing countries and emerging organic markets, third party certification is often difficult to access. PGS provides an alternative option that takes some burden from the farmers and is crucially linked to local products and local markets.

Accreditation and IOAS

IFOAM - Organics International also offers organic accreditation to certification bodies. Certifiers can have their processes audited against the IFOAM Accreditation Requirements. IOAS, an IFOAM - Organics International daughter company set up in 1997, offers the IFOAM Accreditation (analyses of standards and verification process) or the Global Organic System Accreditation (analyses of verification process only) and grants special recognition of credibility. The document ISO/IEC 17011: ‘Conformity assessment – General requirements for accreditation bodies accrediting conformity assessment bodies’ lays down internationally agreed rules for how accreditation should be performed. Various national bodies verify this accreditation including the US Department of Commerce National Institute of Standards & Technology.

IFOAM - Organics International and GMOs

On October 19, 1998, participants at the 12th Scientific Conference of IFOAM - Organics International issued the Mar del Plata Declaration, where more than 600 delegates from over 60 countries voted unanimously to exclude the use of genetically modified organisms (GMOs) in food production and agriculture. From that point GMOs have been categorically excluded from organic farming.




Text of the declaration:

We, the undersigned participants at the 12th Scientific Conference of the International Federation of Organic Agriculture Movements (IFOAM) at Mar del Plata, Argentina, call on governments and regulatory agencies throughout the world to immediately ban the use of genetic engineering in agriculture and food production since it involves:
  • Unacceptable threats to human health
  • Negative and irreversible environmental impacts
  • Release of organisms of an un-recallable nature
  • Removal of the right of choice, both for farmers and consumers
  • Violation of farmers' fundamental property rights and endangerment of their economic independence
  • Practices, which are incompatible with the principles of sustainable agriculture as defined by IFOAM
Signed by: Dr. Vandana Shiva (India), Hervé La Prairie (Outgoing IFOAM - Organics International president, France), Linda Bullard (Incoming IFOAM - Organics International president, USA/Belgium), Gunnar Rundgren (Incoming IFOAM - Organics International vice-president, Sweden), Gerald Hermann (IFOAM - Organics International Treasurer, Germany), Pipo Lernoud (Conference Coordinator, Argentina), Guillermo Schnitman (MAPO President, Argentina)

IFOAM - Organics International and training

Organic agriculture can contribute to meaningful socio-economic and ecologically sustainable development, especially in poorer countries. On one hand, this is due to the application of organic principles, which means efficient management of local resources (e.g., local seed varieties, manure, etc.) and therefore cost-effectiveness. On the other hand, the market for organic products, at both local and international level, has tremendous growth prospects and offers creative producers and exporters in the South excellent opportunities to improve their income and living conditions. IFOAM - Organics International therefore gives special support to the development of the organic agriculture sector in developing countries through several means. Organic agriculture is a very knowledge-intensive production system, so capacity-building efforts play a central role in this regard. There are many efforts around the world to develop training material and organize training courses related to organic agriculture. Existing knowledge is still scattered and not easily accessible, and in developing countries especially, this situation remains an important constraint on the growth of the organic sector.
