
Thursday, November 30, 2023

Atonality

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Atonality

Ending of Schoenberg's "George Lieder" Op. 15/1 presents what would be an "extraordinary" chord in tonal music, without the harmonic-contrapuntal constraints of tonal music.

Atonality in its broadest sense is music that lacks a tonal center, or key. Atonality, in this sense, usually describes compositions written from about the early 20th-century to the present day, where a hierarchy of harmonies focusing on a single, central triad is not used, and the notes of the chromatic scale function independently of one another. More narrowly, the term atonality describes music that does not conform to the system of tonal hierarchies that characterized European classical music between the seventeenth and nineteenth centuries. "The repertory of atonal music is characterized by the occurrence of pitches in novel combinations, as well as by the occurrence of familiar pitch combinations in unfamiliar environments".

The term is also occasionally used to describe music that is neither tonal nor serial, especially the pre-twelve-tone music of the Second Viennese School, principally Alban Berg, Arnold Schoenberg, and Anton Webern. However, "as a categorical label, 'atonal' generally means only that the piece is in the Western tradition and is not 'tonal'", although there are longer periods, e.g., medieval, renaissance, and modern modal music to which this definition does not apply. "Serialism arose partly as a means of organizing more coherently the relations used in the pre-serial 'free atonal' music. ... Thus, many useful and crucial insights about even strictly serial music depend only on such basic atonal theory".

Late 19th- and early 20th-century composers such as Alexander Scriabin, Claude Debussy, Béla Bartók, Paul Hindemith, Sergei Prokofiev, Igor Stravinsky, and Edgard Varèse have written music that has been described, in full or in part, as atonal.

History

While music without a tonal center had been previously written, for example Franz Liszt's Bagatelle sans tonalité of 1885, it is with the coming of the twentieth century that the term atonality began to be applied to pieces, particularly those written by Arnold Schoenberg and The Second Viennese School. The term "atonality" was coined in 1907 by Joseph Marx in a scholarly study of tonality, which was later expanded into his doctoral thesis.

Their music arose from what was described as the "crisis of tonality" between the late nineteenth century and early twentieth century in classical music. This situation had arisen over the course of the nineteenth century due to the increasing use of ambiguous chords, improbable harmonic inflections, and more unusual melodic and rhythmic inflections than what was possible within the styles of tonal music. The distinction between the exceptional and the normal became more and more blurred. As a result, there was a "concomitant loosening" of the synthetic bonds through which tones and harmonies had been related to one another. The connections between harmonies were uncertain even on the lowest chord-to-chord level. On higher levels, long-range harmonic relationships and implications became so tenuous that they hardly functioned at all. At best, the felt probabilities of the style system had become obscure. At worst, they were approaching a uniformity which provided few guides for either composition or listening.

The first phase, known as "free atonality" or "free chromaticism", involved a conscious attempt to avoid traditional diatonic harmony. Works of this period include the opera Wozzeck (1917–1922) by Alban Berg and Pierrot lunaire (1912) by Schoenberg.

The second phase, begun after World War I, was exemplified by attempts to create a systematic means of composing without tonality, most famously the method of composing with 12 tones or the twelve-tone technique. This period included Berg's Lulu and Lyric Suite, Schoenberg's Piano Concerto, his oratorio Die Jakobsleiter and numerous smaller pieces, as well as his last two string quartets. Schoenberg was the major innovator of the system. His student, Anton Webern, however, is anecdotally claimed to have begun linking dynamics and tone color to the primary row, making rows not only of pitches but of other aspects of music as well. However, actual analysis of Webern's twelve-tone works has so far failed to demonstrate the truth of this assertion. One analyst concluded, following a minute examination of the Piano Variations, op. 27, that

while the texture of this music may superficially resemble that of some serial music ... its structure does not. None of the patterns within separate nonpitch characteristics makes audible (or even numerical) sense in itself. The point is that these characteristics are still playing their traditional role of differentiation.

Twelve-tone technique, combined with the parametrization (separate organization of four aspects of music: pitch, attack character, intensity, and duration) of Olivier Messiaen, would be taken as the inspiration for serialism.
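For readers who want to see what the row operations amount to in practice, here is a minimal sketch in Python; the example row and the function names are illustrative, not drawn from any particular composition. A row is a permutation of the twelve pitch classes, and the classical transformations are transposition, inversion, retrograde, and retrograde inversion.

    # Sketch of the four classical twelve-tone row transformations. Pitch
    # classes are integers 0-11 (C = 0); the example row is arbitrary.

    def transpose(row, n):
        """T_n: shift every pitch class up by n semitones (mod 12)."""
        return [(p + n) % 12 for p in row]

    def invert(row):
        """I: reflect every interval about the row's first pitch class."""
        start = row[0]
        return [(2 * start - p) % 12 for p in row]

    def retrograde(row):
        """R: the row in reverse order."""
        return list(reversed(row))

    prime = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]   # arbitrary example row
    print("P0 :", prime)
    print("P2 :", transpose(prime, 2))            # transposed prime
    print("I0 :", invert(prime))                  # inversion
    print("R0 :", retrograde(prime))              # retrograde
    print("RI0:", retrograde(invert(prime)))      # retrograde inversion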

Atonality emerged as a pejorative term to condemn music in which chords were organized with no apparent coherence. In Nazi Germany, atonal music was attacked as "Bolshevik" and labeled as degenerate (Entartete Musik), along with other music produced by enemies of the Nazi regime. Many composers had their works banned by the regime, not to be played until after its collapse at the end of World War II.

After Schoenberg's death, Igor Stravinsky used the twelve-tone technique. Iannis Xenakis generated pitch sets from mathematical formulae, and also saw the expansion of tonal possibilities as part of a synthesis between the hierarchical principle and the theory of numbers, principles which have dominated music since at least the time of Parmenides.

Free atonality

The twelve-tone technique was preceded by Schoenberg's freely atonal pieces of 1908 to 1923, which, though free, often have as an "integrative element...a minute intervallic cell" that in addition to expansion may be transformed as with a tone row, and in which individual notes may "function as pivotal elements, to permit overlapping statements of a basic cell or the linking of two or more basic cells".

The twelve-tone technique was also preceded by nondodecaphonic serial composition used independently in the works of Alexander Scriabin, Igor Stravinsky, Béla Bartók, Carl Ruggles, and others. "Essentially, Schoenberg and Hauer systematized and defined for their own dodecaphonic purposes a pervasive technical feature of 'modern' musical practice, the ostinato."

Composing atonal music

Setting out to compose atonal music may seem complicated because of both the vagueness and the generality of the term. Additionally, George Perle explains that "the 'free' atonality that preceded dodecaphony precludes by definition the possibility of self-consistent, generally applicable compositional procedures". However, he provides one example of a way to compose atonal pieces: a pre-twelve-tone-technique piece by Anton Webern, which rigorously avoids anything that suggests tonality, choosing pitches that do not imply it. In other words, reverse the rules of the common practice period so that what was not allowed is required and what was required is not allowed. This is what was done by Charles Seeger in his explanation of dissonant counterpoint, which is a way to write atonal counterpoint.

Opening of Schoenberg's Klavierstück, Op. 11, No. 1, exemplifying his four procedures as listed by Kostka & Payne 1995.

Kostka and Payne list four procedures as operational in the atonal music of Schoenberg, all of which may be taken as negative rules: avoidance of melodic or harmonic octaves, avoidance of traditional pitch collections such as major or minor triads, avoidance of more than three successive pitches from the same diatonic scale, and use of disjunct melodies (avoidance of conjunct melodies).
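These negative rules lend themselves to mechanical checking. The Python sketch below is an informal illustration, not a reconstruction of Kostka and Payne's own method: the reduction of the rules to pitch classes, the consecutive-note windows, and the two-semitone "step" threshold are simplifying assumptions.

    # Informal checker for the four negative rules, applied to a single
    # melodic line given as MIDI note numbers. The windows and thresholds
    # below are simplifying assumptions for illustration only.

    MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}           # degrees of any major scale
    TRIADS = [{0, 4, 7}, {0, 3, 7}]                # major and minor, as pc sets

    def is_triad(pcs):
        """True if the pitch-class set is a major or minor triad in some transposition."""
        return any({(p - root) % 12 for p in pcs} in TRIADS for root in pcs)

    def violations(melody):
        issues = []
        pcs = [n % 12 for n in melody]
        for i in range(len(melody) - 1):
            if pcs[i] == pcs[i + 1]:                       # rule 1: octaves/unisons
                issues.append(f"octave or unison at notes {i}-{i + 1}")
            if 0 < abs(melody[i + 1] - melody[i]) <= 2:    # rule 4: conjunct motion
                issues.append(f"conjunct step at notes {i}-{i + 1}")
        for i in range(len(melody) - 2):
            if is_triad(set(pcs[i:i + 3])):                # rule 2: outlined triad
                issues.append(f"major/minor triad outlined at notes {i}-{i + 2}")
        for i in range(len(melody) - 3):                   # rule 3: >3 diatonic notes
            window = pcs[i:i + 4]
            if any(all((p - tonic) % 12 in MAJOR_SCALE for p in window)
                   for tonic in range(12)):
                issues.append(f"four successive diatonic notes at {i}-{i + 3}")
        return issues

    # An arbitrary example line to run the checker over.
    print(violations([60, 66, 71, 61, 70, 72]))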

Further, Perle agrees with Oster and Katz that, "the abandonment of the concept of a root-generator of the individual chord is a radical development that renders futile any attempt at a systematic formulation of chord structure and progression in atonal music along the lines of traditional harmonic theory". Atonal compositional techniques and results "are not reducible to a set of foundational assumptions in terms of which the compositions that are collectively designated by the expression 'atonal music' can be said to represent 'a system' of composition". Equal-interval chords are often of indeterminate root, mixed-interval chords are often best characterized by their interval content, while both lend themselves to atonal contexts.

Perle also points out that structural coherence is most often achieved through operations on intervallic cells. A cell "may operate as a kind of microcosmic set of fixed intervallic content, statable either as a chord or as a melodic figure or as a combination of both. Its components may be fixed with regard to order, in which event it may be employed, like the twelve-tone set, in its literal transformations. … Individual tones may function as pivotal elements, to permit overlapping statements of a basic cell or the linking of two or more basic cells".

Regarding the post-tonal music of Perle, one theorist wrote: "While ... montages of discrete-seeming elements tend to accumulate global rhythms other than those of tonal progressions and their rhythms, there is a similarity between the two sorts of accumulative spatial and temporal relationships: a similarity consisting of generalized arching tone-centers linked together by shared background referential materials".

Another approach to composition techniques for atonal music is given by Allen Forte, who developed the theory of atonal music. Forte describes two main operations: transposition and inversion. Transposition can be seen as a rotation by t steps, either clockwise or anti-clockwise, around the pitch-class circle, with each note of the chord rotated equally. For example, if t = 2 and the chord is [0 3 6], transposition (clockwise) yields [2 5 8]. Inversion can be seen as a reflection across the axis formed by 0 and 6; carrying on with our example, [0 3 6] becomes [0 9 6].

An important characteristic is the invariants, the notes that stay identical after a transformation. No distinction is made between the octaves in which a note is played, so that, for example, all Cs are equivalent no matter the octave in which they actually occur; this is why the 12-note scale is represented by a circle. This leads to a definition of the similarity between two chords, which considers the subsets and the interval content of each chord.
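These operations are easy to state in code. The following Python sketch reproduces the article's [0 3 6] example and also computes the invariants and the interval content just mentioned; the function names are illustrative, and pitch classes within a set are assumed to be distinct integers from 0 to 11.

    # Forte-style pitch-class operations on the article's example trichord
    # [0, 3, 6]. Octave is ignored, so all arithmetic is mod 12.

    def transpose(pcs, t):
        """T_t: rotate each pitch class by t steps around the circle."""
        return sorted((p + t) % 12 for p in pcs)

    def invert(pcs):
        """I: reflect about the 0-6 axis (p -> -p mod 12)."""
        return sorted((-p) % 12 for p in pcs)

    def invariants(pcs, transformed):
        """Pitch classes that stay identical after the transformation."""
        return sorted(set(pcs) & set(transformed))

    def interval_vector(pcs):
        """Count of each interval class (1-6) between all pairs of notes."""
        vec = [0] * 6
        pcs = sorted(pcs)
        for i in range(len(pcs)):
            for j in range(i + 1, len(pcs)):
                ic = (pcs[j] - pcs[i]) % 12
                vec[min(ic, 12 - ic) - 1] += 1
        return vec

    chord = [0, 3, 6]
    print(transpose(chord, 2))                  # [2, 5, 8], as in the text
    print(invert(chord))                        # [0, 6, 9] (the text's [0 9 6])
    print(invariants(chord, invert(chord)))     # [0, 6] remain fixed
    print(interval_vector(chord))               # [0, 0, 2, 0, 0, 1]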

Reception and legacy

Controversy over the term itself

The term "atonality" itself has been controversial. Arnold Schoenberg, whose music is generally used to define the term, was vehemently opposed to it, arguing that "The word 'atonal' could only signify something entirely inconsistent with the nature of tone... to call any relation of tones atonal is just as farfetched as it would be to designate a relation of colors aspectral or acomplementary. There is no such antithesis".

Composer and theorist Milton Babbitt also disparaged the term, saying "The works that followed, many of them now familiar, include the Five Pieces for Orchestra, Erwartung, Pierrot Lunaire, and they and a few yet to follow soon were termed 'atonal,' by I know not whom, and I prefer not to know, for in no sense does the term make sense. Not only does the music employ 'tones,' but it employs precisely the same 'tones,' the same physical materials, that music had employed for some two centuries. In all generosity, 'atonal' may have been intended as a mildly analytically derived term to suggest 'atonic' or to signify 'a-triadic tonality', but, even so there were infinitely many things the music was not".

"Atonal" developed a certain vagueness in meaning as a result of its use to describe a wide variety of compositional approaches that deviated from traditional chords and chord progressions. Attempts to solve these problems by using terms such as "pan-tonal", "non-tonal", "multi-tonal", "free-tonal" and "without tonal center" instead of "atonal" have not gained broad acceptance.

Criticism of the concept of atonality

Composer Anton Webern held that "new laws asserted themselves that made it impossible to designate a piece as being in one key or another". Composer Walter Piston, on the other hand, said that, out of long habit, whenever performers "play any little phrase they will hear it in some key—it may not be the right one, but the point is they will play it with a tonal sense. ... [T]he more I feel I know Schoenberg's music the more I believe he thought that way himself. ... And it isn't only the players; it's also the listeners. They will hear tonality in everything".

Donald Jay Grout similarly doubted whether atonality is really possible, because "any combination of sounds can be referred to a fundamental root". He defined it as a fundamentally subjective category: "atonal music is music in which the person who is using the word cannot hear tonal centers".

One difficulty is that even in an otherwise "atonal" work, tonality "by assertion" is normally heard on the thematic or linear level. That is, centricity may be established through the repetition of a central pitch or from emphasis by means of instrumentation, register, rhythmic elongation, or metric accent.

Criticism of atonal music

Swiss conductor, composer, and musical philosopher Ernest Ansermet, a critic of atonal music, wrote extensively on this in the book Les fondements de la musique dans la conscience humaine (The Foundations of Music in Human Consciousness), where he argued that the classical musical language, with its clear, harmonious structures, was a precondition for musical expression. Ansermet argued that a tone system can only lead to a uniform perception of music if it is deduced from just a single interval. For Ansermet this interval was the fifth.

Examples

An example of atonal music is Arnold Schoenberg's "Pierrot Lunaire", a song cycle composed in 1912. The work uses a technique called "Sprechstimme", or spoken singing, and the music is atonal, meaning that there is no clear tonal center or key. Instead, the notes of the chromatic scale function independently of each other, and the harmonies do not follow the traditional tonal hierarchy found in classical music. The result is a dissonant, jarring sound quite different from the harmonies found in tonal music.

Midbrain

From Wikipedia, the free encyclopedia
Figure shows the midbrain (A) and surrounding regions; sagittal view of one cerebellar hemisphere. B: Pons. C: Medulla. D: Spinal cord. E: Fourth ventricle. F: Arbor vitae. G: Nodule. H: Tonsil. I: Posterior lobe. J: Anterior lobe. K: Inferior colliculus. L: Superior colliculus.
 
Inferior view in which the midbrain is encircled in blue.
 
Details
Pronunciation: UK: /ˌmɛsɛnˈsɛfəlɒn, -kɛf-/; US: /ˌmɛzənˈsɛfələn/
Part of: Brainstem
Identifiers
Latin: mesencephalon
MeSH: D008636
NeuroNames: 462
NeuroLex ID: birnlex_1667
TA98: A14.1.03.005
TA2: 5874
FMA: 61993

The midbrain or mesencephalon is the rostral-most portion of the brainstem connecting the diencephalon and cerebrum with the pons. It consists of the cerebral peduncles, tegmentum, and tectum.

It is functionally associated with vision, hearing, motor control, sleep and wakefulness, arousal (alertness), and temperature regulation.

The name comes from the Greek mesos, "middle", and enkephalos, "brain".

Anatomy

The midbrain is the shortest segment of the brainstem, measuring less than 2 cm in length. It is situated mostly in the posterior cranial fossa, with its superior part extending above the tentorial notch.

Structure

Brainstem (dorsal view). A: Thalamus, B: Midbrain, C: Pons, D: Medulla oblongata; 7 and 8 are the colliculi.

The principal regions of the midbrain are the tectum, the cerebral aqueduct, tegmentum, and the cerebral peduncles. Rostrally the midbrain adjoins the diencephalon (thalamus, hypothalamus, etc.), while caudally it adjoins the hindbrain (pons, medulla and cerebellum). In the rostral direction, the midbrain noticeably splays laterally.

Sectioning of the midbrain is usually performed axially, at one of two levels – that of the superior colliculi, or that of the inferior colliculi. One common technique for remembering the structures of the midbrain involves visualizing these cross-sections (especially at the level of the superior colliculi) as the upside-down face of a bear, with the cerebral peduncles forming the ears, the cerebral aqueduct the mouth, and the tectum the chin; prominent features of the tegmentum form the eyes and certain sculptural shadows of the face.

Tectum

Principal connections of the tectum

The tectum (Latin for roof) is the part of the midbrain dorsal to the cerebral aqueduct. The position of the tectum is contrasted with the tegmentum, which refers to the region in front of the ventricular system, or floor of the midbrain.

It is involved in certain reflexes in response to visual or auditory stimuli. The reticulospinal tract, which exerts some control over alertness, takes input from the tectum, and travels both rostrally and caudally from it.

The corpora quadrigemina are four mounds, called colliculi, in two pairs – a superior and an inferior pair – on the surface of the tectum. The superior colliculi process some visual information, aid the decussation of several fibres of the optic nerve (some fibres remain ipsilateral), and are involved with saccadic eye movements. The tectospinal tract connects the superior colliculi to the cervical nerves of the neck, and co-ordinates head and eye movements. Each superior colliculus also sends information to the corresponding lateral geniculate nucleus, with which it is directly connected. The homologous structure to the superior colliculus in non-mammalian vertebrates, including fish and amphibians, is called the optic tectum; in those animals, the optic tectum integrates sensory information from the eyes and certain auditory reflexes.

The inferior colliculi – located just above the trochlear nerve – process certain auditory information. Each inferior colliculus sends information to the corresponding medial geniculate nucleus, with which it is directly connected.

Cerebral aqueduct

Ventricular system anatomy showing the cerebral aqueduct, labelled centre right.

The cerebral aqueduct is the part of the ventricular system which links the third ventricle (rostrally) with the fourth ventricle (caudally); as such it is responsible for continuing the circulation of cerebrospinal fluid. The cerebral aqueduct is a narrow channel located between the tectum and the tegmentum, and is surrounded by the periaqueductal grey, which has a role in analgesia, quiescence, and bonding. The dorsal raphe nucleus (which releases serotonin in response to certain neural activity) is located at the ventral side of the periaqueductal grey, at the level of the inferior colliculus.

The nuclei of two pairs of cranial nerves are similarly located at the ventral side of the periaqueductal grey – the pair of oculomotor nuclei (which control the eyelid, and most eye movements) is located at the level of the superior colliculus, while the pair of trochlear nuclei (which helps focus vision on more proximal objects) is located caudally to that, at the level of the inferior colliculus, immediately lateral to the dorsal raphe nucleus. The oculomotor nerve emerges from the nucleus by traversing the ventral width of the tegmentum, while the trochlear nerve emerges via the tectum, just below the inferior colliculus itself; the trochlear is the only cranial nerve to exit the brainstem dorsally. The Edinger-Westphal nucleus (which controls the shape of the lens and size of the pupil) is located between the oculomotor nucleus and the cerebral aqueduct.

Tegmentum

Cross-section of the midbrain at the level of the superior colliculus
Cross-section of the midbrain at the level of the inferior colliculus.

The midbrain tegmentum is the portion of the midbrain ventral to the cerebral aqueduct, and is much larger in size than the tectum. It communicates with the cerebellum by the superior cerebellar peduncles, which enter at the caudal end, medially, on the ventral side; the cerebellar peduncles are distinctive at the level of the inferior colliculus, where they decussate, but they dissipate more rostrally. Between these peduncles, on the ventral side, is the median raphe nucleus, which is involved in memory consolidation.

The main bulk of the tegmentum contains a complex synaptic network of neurons, primarily involved in homeostasis and reflex actions. It includes portions of the reticular formation. A number of distinct nerve tracts between other parts of the brain pass through it. The medial lemniscus – a narrow ribbon of fibres – passes through in a relatively constant axial position; at the level of the inferior colliculus it is near the lateral edge, on the ventral side, and it retains a similar position rostrally (due to widening of the tegmentum towards the rostral end, the position can appear more medial). The spinothalamic tract – another ribbon-like region of fibres – is located at the lateral edge of the tegmentum; at the level of the inferior colliculus it is immediately dorsal to the medial lemniscus, but due to the rostral widening of the tegmentum, it lies lateral to the medial lemniscus at the level of the superior colliculus.

A prominent pair of round, reddish regions – the red nuclei (which have a role in motor co-ordination) – is located in the rostral portion of the midbrain, somewhat medially, at the level of the superior colliculus. The rubrospinal tract emerges from the red nucleus and descends caudally, primarily heading to the cervical portion of the spine, to implement the red nuclei's decisions. The area between the red nuclei, on the ventral side – known as the ventral tegmental area – is the largest dopamine-producing area in the brain, and is heavily involved in the neural reward system. The ventral tegmental area is in contact with parts of the forebrain – the mammillary bodies and the hypothalamus (both of the diencephalon).

Cerebral peduncles

Brain anatomy – forebrain, midbrain, hindbrain.

The cerebral peduncles each form a lobe ventrally of the tegmentum, on either side of the midline. Beyond the midbrain, between the lobes, is the interpeduncular fossa, which is a cistern filled with cerebrospinal fluid.

The majority of each lobe constitutes the cerebral crus. The cerebral crura are the main tracts descending from the thalamus to caudal parts of the central nervous system; the central and medial ventral portions contain the corticobulbar and corticospinal tracts, while the remainder of each crus primarily contains tracts connecting the cortex to the pons. Older texts refer to the crus cerebri as the cerebral peduncle; however, the latter term actually covers all fibres communicating with the cerebrum (usually via the diencephalon), and therefore would include much of the tegmentum as well. The remainder of the crus pedunculi – small regions around the main cortical tracts – contains tracts from the internal capsule.

The portion of the lobes in connection with the tegmentum, except the most lateral portion, is dominated by a blackened band – the substantia nigra (literally "black substance") – which is the only part of the basal ganglia system outside the forebrain. It is ventrally wider at the rostral end. By means of the basal ganglia, the substantia nigra is involved in motor planning, learning, addiction, and other functions. There are two regions within the substantia nigra – one where neurons are densely packed (the pars compacta) and one where they are not (the pars reticulata) – which serve different roles from one another within the basal ganglia system. The substantia nigra has extremely high production of melanin (hence the colour), dopamine, and noradrenalin; the loss of dopamine-producing neurons in this region contributes to the progression of Parkinson's disease.

Vasculature

Arterial supply

The midbrain is supplied by the following arteries:

Venous drainage

Venous blood from the midbrain is mostly drained into the basal vein as it passes around the peduncle. Some venous blood from the colliculi drains to the great cerebral vein.

Development

Mesencephalon of human embryo

During embryonic development, the midbrain (also known as the mesencephalon) arises from the second vesicle of the neural tube, while the interior of this portion of the tube becomes the cerebral aqueduct. Unlike the other two vesicles – the forebrain and hindbrain – the midbrain does not develop further subdivision for the remainder of neural development. It does not split into other brain areas, while the forebrain, for example, divides into the telencephalon and the diencephalon.

Throughout embryonic development, the cells within the midbrain continually multiply; this happens to a much greater extent ventrally than it does dorsally. The outward expansion compresses the still-forming cerebral aqueduct, which can result in partial or total obstruction, leading to congenital hydrocephalus. The tectum is derived in embryonic development from the alar plate of the neural tube.

Function

The mesencephalon is considered part of the brainstem. Its substantia nigra is closely associated with motor system pathways of the basal ganglia. The human mesencephalon is archipallian in origin, meaning that its general architecture is shared with the most ancient of vertebrates. Dopamine produced in the substantia nigra and ventral tegmental area plays a role in movement, movement planning, excitation, motivation and habituation of species from humans to the most elementary animals such as insects. Laboratory house mice from lines that have been selectively bred for high voluntary wheel running have enlarged midbrains. The midbrain helps to relay information for vision and hearing.

Virtual Network Computing

From Wikipedia, the free encyclopedia
Virtual Network Computing logo

Virtual Network Computing (VNC) is a graphical desktop-sharing system that uses the Remote Frame Buffer protocol (RFB) to remotely control another computer. It transmits keyboard and mouse input from one computer to another and relays graphical-screen updates over a network.

VNC is platform-independent – there are clients and servers for many GUI-based operating systems and for Java. Multiple clients may connect to a VNC server at the same time. Popular uses for this technology include remote technical support and accessing files on one's work computer from one's home computer, or vice versa.

VNC was originally developed at the Olivetti & Oracle Research Lab in Cambridge, United Kingdom. The original VNC source code and many modern derivatives are open source under the GNU General Public License.

VNC in KDE 3.1

There are a number of variants of VNC which offer their own particular functionality; e.g., some optimised for Microsoft Windows, or offering file transfer (not part of VNC proper), etc. Many are compatible (without their added features) with VNC proper in the sense that a viewer of one flavour can connect with a server of another; others are based on VNC code but not compatible with standard VNC.

VNC and RFB are registered trademarks of RealVNC Ltd. in the US and some other countries.

History

The Olivetti & Oracle Research Lab (ORL) at Cambridge in the UK developed VNC at a time when Olivetti and Oracle Corporation owned the lab. In 1999, AT&T acquired the lab, and in 2002 closed down the lab's research efforts.

Developers who worked on VNC while still at the AT&T Research Lab include:

Following the closure of ORL in 2002, several members of the development team (including Richardson, Harter, Weatherall and Hopper) formed RealVNC in order to continue working on open-source and commercial VNC software under that name.

The original GPLed source code has fed into several other versions of VNC. Such forking has not led to compatibility problems because the RFB protocol is designed to be extensible. VNC clients and servers negotiate their capabilities with handshaking in order to use the most appropriate options supported at both ends.

As of 2013, RealVNC Ltd claims the term "VNC" as a registered trademark in the United States and in other countries.

Etymology

The name Virtual Network Computer/Computing (VNC) originated with ORL's work on a thin client called the Videotile, which also used the RFB protocol. The Videotile had an LCD display with pen input and a fast ATM connection to the network. At the time, network computer was commonly used as a synonym for a thin client; VNC is essentially a software-only (i.e. virtual) network computer.

Operation

  • The VNC server is the program on the machine that shares some screen (and may not be related to a physical display – the server can be "headless"), and allows the client to share control of it.
  • The VNC client (or viewer) is the program that represents the screen data originating from the server, receives updates from it, and presumably controls it by informing the server of collected local input.
  • The VNC protocol (RFB protocol) is very simple, based on transmitting one graphic primitive from server to client ("Put a rectangle of pixel data at the specified X,Y position") and event messages from client to server.

In the normal method of operation a viewer connects to a port on the server (default port: 5900). Alternatively (depending on the implementation) a browser can connect to the server (default port: 5800). And a server can connect to a viewer in "listening mode" on port 5500. One advantage of listening mode is that the server site does not have to configure its firewall to allow access on port 5900 (or 5800); the duty is on the viewer, which is useful if the server site has no computer expertise and the viewer user is more knowledgeable.
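As an illustration of the normal method of operation, the following Python sketch connects to the default display port and performs only the first steps of the RFB exchange. The host address is a placeholder, RFB 3.8 framing is assumed, and a robust client would loop on recv and handle older protocol versions.

    # Sketch of the first steps of an RFB session on the default display port.

    import socket

    HOST, DISPLAY = "192.0.2.10", 0          # placeholder server, display :0
    PORT = 5900 + DISPLAY                    # default VNC port numbering

    with socket.create_connection((HOST, PORT)) as sock:
        server_version = sock.recv(12)       # e.g. b"RFB 003.008\n"
        print("server offers:", server_version)

        sock.sendall(b"RFB 003.008\n")       # client answers with its version

        n_types = sock.recv(1)[0]            # RFB 3.8: count of security types
        types = sock.recv(n_types)
        print("security types:", list(types))  # e.g. 1 = None, 2 = VNC auth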

The server sends small rectangles of the framebuffer to the client. In its simplest form, the VNC protocol can use a lot of bandwidth, so various methods have been devised to reduce the communication overhead. For example, there are various encodings (methods to determine the most efficient way to transfer these rectangles). The VNC protocol allows the client and server to negotiate which encoding they will use. The simplest encoding, supported by all clients and servers, is raw encoding, which sends pixel data in left-to-right scanline order, and after the original full screen has been transmitted, transfers only rectangles that change. This encoding works very well if only a small portion of the screen changes from one frame to the next (as when a mouse pointer moves across a desktop, or when text is written at the cursor), but bandwidth demands get very high if a lot of pixels change at the same time (such as when scrolling a window or viewing full-screen video).
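To make the rectangle-based updates concrete, here is a sketch of how a single changed region might be framed as a raw-encoded FramebufferUpdate message; the 32-bit pixel format and the solid-grey pixel data are assumptions chosen purely for illustration, with field widths following the published RFB layout.

    # Framing one changed region as a raw-encoded FramebufferUpdate message
    # (big-endian fields).

    import struct

    def raw_update(x, y, width, height, pixels, bytes_per_pixel=4):
        """Pack a single raw-encoded rectangle into a FramebufferUpdate message."""
        assert len(pixels) == width * height * bytes_per_pixel
        header = struct.pack(">BxH", 0, 1)                      # type 0, pad, 1 rect
        rect = struct.pack(">HHHHi", x, y, width, height, 0)    # encoding 0 = raw
        return header + rect + pixels

    # A 16x16 block of solid grey at screen position (100, 50).
    pixels = bytes([0x80, 0x80, 0x80, 0x00]) * (16 * 16)
    message = raw_update(100, 50, 16, 16, pixels)
    print(len(message))     # 4 + 12 + 1024 bytes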

VNC by default uses TCP port 5900+N, where N is the display number (usually :0 for a physical display). Several implementations also start a basic HTTP server on port 5800+N to provide a VNC viewer as a Java applet, allowing easy connection through any Java-enabled web browser. Different port assignments can be used as long as both client and server are configured accordingly. An HTML5 VNC client implementation for modern browsers (no plugins required) also exists.

Although possible even on low bandwidth, using VNC over the Internet is facilitated if the user has a broadband connection at both ends. However, it may require advanced network address translation (NAT), firewall and router configuration such as port forwarding in order for the connection to go through. Users may establish communication through virtual private network (VPN) technologies to ease usage over the Internet, or as a LAN connection if VPN is used as a proxy, or through a VNC repeater (useful in presence of a NAT).

Xvnc is the Unix VNC server, which is based on a standard X server. To applications, Xvnc appears as an X "server" (i.e., it displays client windows), and to remote VNC users it is a VNC server. Applications can display themselves on Xvnc as if it were a normal X display, but they will appear on any connected VNC viewers rather than on a physical screen. Alternatively, a machine (which may be a workstation or a network server) with screen, keyboard, and mouse can be set up to boot and run the VNC server as a service or daemon, then the screen, keyboard, and mouse can be removed and the machine stored in an out-of-the-way location.

In addition, the display that is served by VNC is not necessarily the same display seen by a user on the server. On Unix/Linux computers that support multiple simultaneous X11 sessions, VNC may be set to serve a particular existing X11 session, or to start one of its own. It is also possible to run multiple VNC sessions from the same computer. On Microsoft Windows the VNC session served is always the current user session.

Users commonly deploy VNC as a cross-platform remote desktop system. For example, Apple Remote Desktop for Mac OS X (and for a time, "Back to My Mac" in 'Leopard' - Mac OS X 10.5 through 'High Sierra' - macOS 10.13) interoperates with VNC and will connect to a Unix user's current desktop if it is served with x11vnc, or to a separate X11 session if one is served with TightVNC. From Unix, TightVNC will connect to a Mac OS X session served by Apple Remote Desktop if the VNC option is enabled, or to a VNC server running on Microsoft Windows.

In July 2014 RealVNC published a Wayland developer preview.

Security

By default, RFB is not a secure protocol. While passwords are not sent in plain-text (as in telnet), cracking could prove successful if both the encryption key and encoded password were sniffed from a network. For this reason it is recommended that a password of at least 8 characters be used. On the other hand, there is also an 8-character limit on some versions of VNC; if a password is sent exceeding 8 characters, the excess characters are removed and the truncated string is compared to the password.
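The practical effect of the 8-character limit can be seen in a rough sketch of the classic VNC challenge-response, written here with the pycryptodome library; the bit-mirrored DES key is the widely documented VNC quirk, and the sketch is an approximation rather than a reference implementation.

    # Rough sketch of classic VNC password authentication, showing why only
    # the first 8 characters of the password matter. Requires pycryptodome.

    from Crypto.Cipher import DES

    def vnc_des_key(password: str) -> bytes:
        raw = password.encode("latin-1")[:8].ljust(8, b"\x00")   # truncate/pad to 8 bytes
        # Mirror the bit order of every byte, as VNC's DES variant expects.
        return bytes(int(f"{b:08b}"[::-1], 2) for b in raw)

    def respond(password: str, challenge: bytes) -> bytes:
        """Encrypt the server's 16-byte challenge with the password-derived key."""
        return DES.new(vnc_des_key(password), DES.MODE_ECB).encrypt(challenge)

    # Characters beyond the eighth are ignored, so these two responses match.
    challenge = bytes(16)
    print(respond("secretpassword", challenge) == respond("secretpa", challenge))   # True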

UltraVNC supports the use of an open-source encryption plugin which encrypts the entire VNC session including password authentication and data transfer. It also allows authentication to be performed based on NTLM and Active Directory user accounts. However, use of such encryption plugins makes it incompatible with other VNC programs. RealVNC offers high-strength AES encryption as part of its commercial package, along with integration with Active Directory. Workspot released AES encryption patches for VNC. According to TightVNC, TightVNC is not secure as picture data is transmitted without encryption. To circumvent this, it should be tunneled through an SSH connection (see below).

VNC may be tunneled over an SSH or VPN connection which would add an extra security layer with stronger encryption. SSH clients are available for most platforms; SSH tunnels can be created from UNIX clients, Microsoft Windows clients, Mac clients (including Mac OS X and System 7 and up) – and many others. There are also freeware applications that create instant VPN tunnels between computers.

An additional security concern for the use of VNC is to check whether the version used requires authorization from the remote computer owner before someone takes control of their device. This avoids the situation in which the owner of the accessed computer discovers, without prior notice, that someone is in control of their device.

Electronic publishing

From Wikipedia, the free encyclopedia

Electronic publishing (also referred to as e-publishing, digital publishing, or online publishing) includes the digital publication of e-books, digital magazines, and the development of digital libraries and catalogues. It also includes the editing of books, journals, and magazines to be posted on a screen (computer, e-reader, tablet, or smartphone).

About

Electronic publishing has become common in scientific publishing where it has been argued that peer-reviewed scientific journals are in the process of being replaced by electronic publishing. It is also becoming common to distribute books, magazines, and newspapers to consumers through tablet reading devices, a market that is growing by millions each year, generated by online vendors such as Apple's iTunes bookstore, Amazon's bookstore for Kindle, and books in the Google Play Bookstore. Market research suggested that half of all magazine and newspaper circulation would be via digital delivery by the end of 2015 and that half of all reading in the United States would be done without paper by 2015.

Although distribution via the Internet (also known as online publishing or web publishing when in the form of a website) is nowadays strongly associated with electronic publishing, there are many non-network electronic publications such as encyclopedias on CD and DVD, as well as technical and reference publications relied on by mobile users and others without reliable and high-speed access to a network. Electronic publishing is also being used in the field of test preparation in developed as well as developing economies for student education (thus partly replacing conventional books), since it enables content to be combined with analytics for the benefit of students. The use of electronic publishing for textbooks may become more prevalent with Apple Books from Apple Inc. and Apple's negotiation with the three largest textbook suppliers in the U.S.

Electronic publishing is increasingly popular in works of fiction. Electronic publishers are able to respond quickly to changing market demand, because the companies do not have to order printed books and have them delivered. E-publishing is also making a wider range of books available, including books that customers would not find in standard book retailers, due to insufficient demand for a traditional "print run". E-publication is enabling new authors to release books that would be unlikely to be profitable for traditional publishers. While the term "electronic publishing" is primarily used in the 2010s to refer to online and web-based publishers, the term has a history of being used to describe the development of new forms of production, distribution, and user interaction in regard to computer-based production of text and other interactive media.

History

Digitization

The first digitization initiative was launched in 1971 by Michael S. Hart, a student at the University of Illinois, who started Project Gutenberg, designed to make literature more accessible to everyone through the Internet. The project took a while to develop, and in 1989 there were still only 10 texts, manually retyped on computer by Michael S. Hart himself and a few volunteers. But with the appearance of the Web in 1991 and its ability to connect documents together through static pages, the project moved forward quickly, and many more volunteers helped develop it by giving access to public-domain classics.

In the 1970s, the French National Centre for Scientific Research digitized a thousand books on diverse subjects, mostly literature but also philosophy and science, dating from the 12th century to the present. In this way the foundations of a large dictionary, the Trésor de la langue française, were built. This body of e-texts, named Frantext, was published on compact disc under the brand name Discotext, and then on the World Wide Web in 1998.

Mass-scale digitization

In 1974, American inventor and futurist Raymond Kurzweil developed a scanner equipped with omnifont software that enabled optical character recognition of text in any typeface. Digitization projects could then become more ambitious, since the time needed for digitization decreased considerably, and digital libraries were on the rise. All over the world, e-libraries started to emerge.

The ABU (Association des Bibliophiles Universels) was a public digital library project created by the Cnam in 1993. It was the first French digital library on the network; although it has been suspended since 2002, it reproduced over a hundred texts that are still available.

In 1992, the Bibliothèque nationale de France launched a vast digitization program. President François Mitterrand had wanted since 1988 to create a new and innovative digital library, and it was published in 1997 under the name Gallica. In 2014, the digital library was offering 80,255 online books and over a million documents, including prints and manuscripts.

In 2003, Wikisource was launched, and the project aspired to constitute a digital and multilingual library that would complement the Wikipedia project. It was originally named "Project Sourceberg", as a play on words recalling Project Gutenberg. Supported by the Wikimedia Foundation, Wikisource offers digitized texts that have been verified by volunteers.

In December 2004, Google created Google Books, a project to digitize all the books available in the world (over 130 million books) and make them accessible online. Ten years later, 25,000,000 books, from a hundred countries and in 400 languages, were on the platform. This was possible because, by that time, robotic scanners could digitize around 6,000 books per hour.

In 2008, the prototype of Europeana was launched, and by 2010 the project was giving access to over 10 million digital objects. The Europeana library is a European catalog that offers index cards on millions of digital objects and links to their digital libraries. In the same year, HathiTrust was created to bring together the contents of many university e-libraries from the USA and Europe, as well as Google Books and the Internet Archive. In 2016, HathiTrust had over six million users.

Electronic publishing

The first digitization projects transferred physical content into digital content. Electronic publishing aims to integrate the whole process of editing and publishing (production, layout, publication) in the digital world.

Alain Mille, in the book Pratiques de l'édition numérique (edited by Michael E. Sinatra and Marcello Vitali-Rosati), says that the beginnings of the Internet and the Web are at the very core of electronic publishing, since they largely determined the biggest changes in production and distribution patterns. The Internet has a direct effect on publishing, letting creators and users go beyond the traditional process (writer, editor, publishing house).

Traditional publishing, and especially the creation stage, was first revolutionized by new desktop publishing software appearing in the 1980s, and by the text databases created for encyclopedias and directories. At the same time multimedia was developing quickly, combining book, audiovisual, and computer-science characteristics. CDs and DVDs appeared, permitting the visualization of these dictionaries and encyclopedias on computers.

The arrival and democratization of the Internet has slowly given small publishing houses the opportunity to publish their books directly online. Some websites, like Amazon, let their users buy eBooks; Internet users can also find many educational platforms (free or not), encyclopedic websites like Wikipedia, and even digital magazine platforms. The eBook has thus become more and more accessible on many different devices, such as e-readers and even smartphones. The digital book had, and still has, an important impact on publishing houses and their economic models; it remains an evolving domain, and publishers have yet to master the new ways of publishing in a digital era.

Online edition

Based on the new communication practices of Web 2.0 and its architecture of participation, online editing opens the door for a community to collaborate in developing and improving content on the Internet, while also enriching reading through collective reading practices. Web 2.0 not only links documents together, as the Web 1.0 did; it also links people together through social media, which is why it is called the participative (or participatory) Web.

Many tools have been put in place to foster sharing and collective creative content. One of many is the Wikipedia encyclopedia, which is edited, corrected, and enhanced by millions of contributors. OpenStreetMap is based on the same principle. Blogs and comment systems are now also recognized as forms of online editing and publishing, since they make possible new interactions between authors and their readers, and can be an important method for inspiration as well as for visibility.

Process

The electronic publishing process follows some aspects of the traditional paper-based publishing process but differs from traditional publishing in two ways: 1) it does not include using an offset printing press to print the final product and 2) it avoids the distribution of a physical product (e.g., paper books, paper magazines, or paper newspapers). Because the content is electronic, it may be distributed over the Internet and through electronic bookstores, and users can read the material on a range of electronic and digital devices, including desktop computers, laptops, tablet computers, smartphones or e-reader tablets. The consumer may read the published content online on a website, in an application on a tablet device, or in a PDF document on a computer. In some cases, the reader may print the content onto paper using a consumer-grade ink-jet or laser printer or via a print-on-demand system. Some users download digital content to their devices, enabling them to read the content even when their device is not connected to the Internet (e.g., on an airplane flight).

Distributing content electronically as software applications ("apps") has become popular in the 2010s, due to the rapid consumer adoption of smartphones and tablets. At first, native apps for each mobile platform were required to reach all audiences, but in an effort toward universal device compatibility, attention has turned to using HTML5 to create web apps that can run on any browser and function on many devices. The benefit of electronic publishing comes from using three attributes of digital technology: XML tags to define content, style sheets to define the look of content, and metadata (data about data) to describe the content for search engines, thus helping users to find and locate the content (a common example of metadata is the information about a song's songwriter, composer, and genre that is electronically encoded along with most CDs and digital audio files; this metadata makes it easier for music lovers to find the songs they are looking for). The use of tags, style sheets, and metadata enables "reflowable" content that adapts to various reading devices (tablet, smartphone, e-reader, etc.) or electronic delivery methods.

Because electronic publishing often requires text markup (e.g., HyperText Markup Language or some other markup language) to develop online delivery methods, the traditional roles of typesetters and book designers, who created the printing set-ups for paper books, have changed. Designers of digitally published content must have a strong knowledge of markup languages, the variety of reading devices and computers available, and the ways in which consumers read, view, or access the content. However, in the 2010s, new user-friendly design software became available that lets designers publish content in this standard without needing to know detailed programming techniques, such as Adobe Systems' Digital Publishing Suite and Apple's iBooks Author. The most common file format is .epub, used in many e-book formats; .epub is a free and open standard available in many publishing programs. Another common format is .folio, which is used by the Adobe Digital Publishing Suite to create content for Apple's iPad tablets and apps.
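As a concrete illustration of how markup, style sheets, and metadata come together in the .epub container, the following Python sketch assembles a minimal e-book with only the standard library. The file names, metadata values, and content are invented for illustration, and a real EPUB needs more (for example a navigation document and a genuine unique identifier) to pass validation.

    # Minimal sketch of packaging an .epub: XHTML content, a CSS style sheet,
    # and OPF metadata, zipped with the mandatory uncompressed mimetype entry.

    import zipfile

    CONTAINER = """<?xml version="1.0" encoding="UTF-8"?>
    <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
      <rootfiles>
        <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
      </rootfiles>
    </container>"""

    OPF = """<?xml version="1.0" encoding="UTF-8"?>
    <package xmlns="http://www.idpf.org/2007/opf" version="3.0" unique-identifier="id">
      <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:identifier id="id">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
        <dc:title>Example Title</dc:title>
        <dc:language>en</dc:language>
      </metadata>
      <manifest>
        <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
        <item id="css" href="style.css" media-type="text/css"/>
      </manifest>
      <spine>
        <itemref idref="ch1"/>
      </spine>
    </package>"""

    CHAPTER = """<?xml version="1.0" encoding="UTF-8"?>
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head><title>Chapter 1</title><link rel="stylesheet" href="style.css"/></head>
      <body><h1>Chapter 1</h1><p>Reflowable text goes here.</p></body>
    </html>"""

    with zipfile.ZipFile("example.epub", "w") as epub:
        # The mimetype entry must come first and be stored uncompressed.
        epub.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
        epub.writestr("META-INF/container.xml", CONTAINER, compress_type=zipfile.ZIP_DEFLATED)
        epub.writestr("OEBPS/content.opf", OPF, compress_type=zipfile.ZIP_DEFLATED)
        epub.writestr("OEBPS/chapter1.xhtml", CHAPTER, compress_type=zipfile.ZIP_DEFLATED)
        epub.writestr("OEBPS/style.css", "p { line-height: 1.4; }", compress_type=zipfile.ZIP_DEFLATED)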

Academic publishing

After an article is submitted to an academic journal for consideration, there can be a delay ranging from several months to more than two years before it is published in a journal, rendering journals a less than ideal format for disseminating current research. In some fields, such as astronomy and some areas of physics, the role of the journal in disseminating the latest research has largely been replaced by preprint repositories such as arXiv.org. However, scholarly journals still play an important role in quality control and establishing scientific credit. In many instances, the electronic materials uploaded to preprint repositories are still intended for eventual publication in a peer-reviewed journal. There is statistical evidence that electronic publishing provides wider dissemination, because when a journal is available online, a larger number of researchers can access the journal. Even if a professor is working in a university that does not have a certain journal in its library, she may still be able to access the journal online. A number of journals have, while retaining their longstanding peer review process to ensure that the research is done properly, established electronic versions or even moved entirely to electronic publication.

Copyright

In the early 2000s, many of the existing copyright laws were designed around printed books, magazines and newspapers. For example, copyright laws often set limits on how much of a book can be mechanically reproduced or copied. Electronic publishing raises new questions in relation to copyright, because if an e-book or e-journal is available online, millions of Internet users may be able to view a single electronic copy of the document, without any "copies" being made.

Emerging evidence suggests that e-publishing may be more collaborative than traditional paper-based publishing; e-publishing often involves more than one author, and the resulting works are more accessible, since they are published online. At the same time, the availability of published material online opens more doors for plagiarism, unauthorized use, or re-use of the material. Some publishers are trying to address these concerns. For example, in 2011, HarperCollins limited the number of times that one of its e-books could be lent in a public library. Other publishers, such as Penguin, are attempting to incorporate e-book elements into their regular paper publications.

Examples

Electronic versions of traditional media

New media

Business models

  • Digital distribution
  • Online advertising
  • Open access (publishing)
  • Pay-per-view
  • Print on demand
  • Self-publishing
  • Subscriptions
  • Non-subsidy publishing
Representation of a Lie group

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Representation_of_a_Lie_group...