
Friday, July 17, 2020

Gestalt psychology

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Gestalt_psychology
Gestalt psychology or gestaltism is a school of psychology that emerged in Austria and Germany in the early twentieth century based on work by Max Wertheimer, Wolfgang Köhler, and Kurt Koffka. As used in Gestalt psychology, the German word Gestalt (meaning "form") is interpreted as "pattern" or "configuration". Gestalt psychologists emphasized that organisms perceive entire patterns or configurations, not merely individual components. The view is sometimes summarized using the adage, "the whole is more than the sum of its parts." Gestalt principles such as proximity, similarity, figure-ground, continuity, closure, and connection determine how humans perceive visuals in relation to different objects and environments.

Origin and history

Max Wertheimer (1880–1943), Kurt Koffka (1886–1941), and Wolfgang Köhler (1887–1967) founded Gestalt psychology in the early 20th century. The dominant view in psychology at the time was structuralism, exemplified by the work of Hermann von Helmholtz (1821–1894), Wilhelm Wundt (1832–1920), and Edward B. Titchener (1867–1927). Structuralism was rooted firmly in British empiricism and was based on three closely interrelated theories: (1) "atomism," also known as "elementalism," the view that all knowledge, even complex abstract ideas, is built from simple, elementary constituents, (2) "sensationalism," the view that the simplest constituents—the atoms of thought—are elementary sense impressions, and (3) "associationism," the view that more complex ideas arise from the association of simpler ideas. Together, these three theories give rise to the view that the mind constructs all perceptions and even abstract thoughts strictly from lower-level sensations that are related solely by being associated closely in space and time. The Gestaltists took issue with this widespread "atomistic" view that the aim of psychology should be to break consciousness down into putative basic elements. In contrast, the Gestalt psychologists believed that breaking psychological phenomena down into smaller parts would not lead to understanding psychology. The Gestalt psychologists believed, instead, that the most fruitful way to view psychological phenomena is as organized, structured wholes. They argued that the psychological "whole" has priority and that the "parts" are defined by the structure of the whole, rather than vice versa. One could say that the approach was based on a macroscopic view of psychology rather than a microscopic approach. Gestalt theories of perception are based on human nature being inclined to understand objects as an entire structure rather than the sum of its parts.

Wertheimer had been a student of Austrian philosopher, Christian von Ehrenfels (1859–1932), a member of the School of Brentano. Von Ehrenfels introduced the concept of Gestalt to philosophy and psychology in 1890, before the advent of Gestalt psychology as such. Von Ehrenfels observed that a perceptual experience, such as perceiving a melody or a shape, is more than the sum of its sensory components. He claimed that, in addition to the sensory elements of the perception, there is something extra. Although in some sense derived from the organization of the component sensory elements, this further quality is an element in its own right. He called it Gestalt-qualität or "form-quality." For instance, when one hears a melody, one hears the notes plus something in addition to them that binds them together into a tune – the Gestalt-qualität. It is this Gestalt-qualität that, according to von Ehrenfels, allows a tune to be transposed to a new key, using completely different notes, while still retaining its identity. The idea of a Gestalt-qualität has roots in theories by David Hume, Johann Wolfgang von Goethe, Immanuel Kant, David Hartley, and Ernst Mach. Both von Ehrenfels and Edmund Husserl seem to have been inspired by Mach's work Beiträge zur Analyse der Empfindungen (Contributions to the Analysis of Sensations, 1886), in formulating their very similar concepts of gestalt and figural moment, respectively.

By 1914, the first published references to Gestalt theory could be found in a footnote of Gabriele von Wartensleben's application of Gestalt theory to personality. She was a student at the Frankfurt Academy for Social Sciences who interacted deeply with Wertheimer and Köhler.

Through a series of experiments, Wertheimer discovered that a person observing a pair of alternating bars of light can, under the right conditions, experience the illusion of movement between one location and the other. He noted that this was a perception of motion absent any moving object. That is, it was pure phenomenal motion. He dubbed it phi ("phenomenal") motion. Wertheimer's publication of these results in 1912 marks the beginning of Gestalt psychology. In comparison to von Ehrenfels and others who had used the term "gestalt" earlier in various ways, Wertheimer's unique contribution was to insist that the "gestalt" is perceptually primary. The gestalt defines the parts from which it is composed, rather than being a secondary quality that emerges from those parts. Wertheimer took the more radical position that "what is given me by the melody does not arise ... as a secondary process from the sum of the pieces as such. Instead, what takes place in each single part already depends upon what the whole is", (1925/1938). In other words, one hears the melody first and only then may perceptually divide it up into notes. Similarly, in vision, one sees the form of the circle first—it is given "im-mediately" (i.e., its apprehension is not mediated by a process of part-summation). Only after this primary apprehension might one notice that it is made up of lines or dots or stars.

The two men who served as Wertheimer's subjects in the phi experiments were Köhler and Koffka. Köhler was an expert in physical acoustics, having studied under physicist Max Planck (1858–1947), but had taken his degree in psychology under Carl Stumpf (1848–1936). Koffka was also a student of Stumpf's, having studied movement phenomena and psychological aspects of rhythm. In 1917, Köhler (1917/1925) published the results of four years of research on learning in chimpanzees. Köhler showed, contrary to the claims of most other learning theorists, that animals can learn by "sudden insight" into the "structure" of a problem, over and above the associative and incremental manner of learning that Ivan Pavlov (1849–1936) and Edward Lee Thorndike (1874–1949) had demonstrated with dogs and cats, respectively.

The terms "structure" and "organization" were focal for the Gestalt psychologists. Stimuli were said to have a certain structure, to be organized in a certain way, and that it is to this structural organization, rather than to individual sensory elements, that the organism responds. When an animal is conditioned, it does not simply respond to the absolute properties of a stimulus, but to its properties relative to its surroundings. To use a favorite example of Köhler's, if conditioned to respond in a certain way to the lighter of two gray cards, the animal generalizes the relation between the two stimuli rather than the absolute properties of the conditioned stimulus: it will respond to the lighter of two cards in subsequent trials even if the darker card in the test trial is of the same intensity as the lighter one in the original training trials.

In 1921, Koffka published a Gestalt-oriented text on developmental psychology, Growth of the Mind. With the help of American psychologist Robert Ogden, Koffka introduced the Gestalt point of view to an American audience in 1922 by way of a paper in Psychological Bulletin. It contains criticisms of then-current explanations of a number of problems of perception, and the alternatives offered by the Gestalt school. Koffka moved to the United States in 1924, eventually settling at Smith College in 1927. In 1935, Koffka published his Principles of Gestalt Psychology. This textbook laid out the Gestalt vision of the scientific enterprise as a whole. Science, he said, is not the simple accumulation of facts. What makes research scientific is the incorporation of facts into a theoretical structure. The goal of the Gestaltists was to integrate the facts of inanimate nature, life, and mind into a single scientific structure. This meant that science would have to accommodate not only what Koffka called the quantitative facts of physical science but the facts of two other "scientific categories": questions of order and questions of Sinn, a German word which has been variously translated as significance, value, and meaning. Without incorporating the meaning of experience and behavior, Koffka believed that science would doom itself to trivialities in its investigation of human beings.

Having survived the Nazis up to the mid-1930s, all the core members of the Gestalt movement were forced out of Germany to the United States by 1935. Köhler published another book, Dynamics in Psychology, in 1940 but thereafter the Gestalt movement suffered a series of setbacks. Koffka died in 1941 and Wertheimer in 1943. Wertheimer's long-awaited book on mathematical problem-solving, Productive Thinking, was published posthumously in 1945, but Köhler was left to guide the movement without his two long-time colleagues.

Gestalt therapy

Gestalt psychology should not be confused with Gestalt therapy, which is only peripherally linked to Gestalt psychology. The founders of Gestalt therapy, Fritz and Laura Perls, had worked with Kurt Goldstein, a neurologist who had applied principles of Gestalt psychology to the functioning of the organism. Laura Perls had been a Gestalt psychologist before she became a psychoanalyst and before she began developing Gestalt therapy together with Fritz Perls. The extent to which Gestalt psychology influenced Gestalt therapy is disputed, however. In any case, it is not identical with Gestalt psychology. On the one hand, Laura Perls preferred not to use the term "Gestalt" to name the emerging new therapy, because she thought that the Gestalt psychologists would object to it; on the other hand, Fritz and Laura Perls clearly adopted some of Goldstein's work. Thus, though recognizing the historical connection and the influence, most Gestalt psychologists emphasize that Gestalt therapy is not a form of Gestalt psychology.

Mary Henle noted in her presidential address to Division 24 at the meeting of the American Psychological Association (1975): "What Perls has done has been to take a few terms from Gestalt psychology, stretch their meaning beyond recognition, mix them with notions—often unclear and often incompatible—from the depth psychologies, existentialism, and common sense, and he has called the whole mixture gestalt therapy. His work has no substantive relation to scientific Gestalt psychology. To use his own language, Fritz Perls has done 'his thing'; whatever it is, it is not Gestalt psychology." In her analysis, however, she restricts herself explicitly to only three of Perls' books from 1969 and 1972, leaving out Perls' earlier work and Gestalt therapy in general as a psychotherapy method.

There were clinical applications of Gestalt psychology in the psychotherapeutic field long before Perlsian Gestalt therapy: in group psychoanalysis (Foulkes), in Adlerian individual psychology, by Gestalt psychologists working in psychotherapy such as Erwin Levy and Abraham S. Luchins, and by Gestalt-psychologically oriented psychoanalysts in Italy (Canestrari and others); there have also been newer developments, foremost in Europe. For example, a strictly Gestalt psychology-based therapeutic method is Gestalt Theoretical Psychotherapy, developed by the German Gestalt psychologist and psychotherapist Hans-Jürgen Walter and his colleagues in Germany, Austria (Gerhard Stemberger and colleagues) and Switzerland. Other countries, especially Italy, have seen similar developments.

Contributions

Gestalt psychology made many contributions to the body of psychology. The Gestaltists were the first to demonstrate empirically and document many facts about perception—including facts about the perception of movement, the perception of contour, perceptual constancy, and perceptual illusions. Wertheimer's discovery of the phi phenomenon is one example of such a contribution. In addition to discovering perceptual phenomena, the contributions of Gestalt psychology include: (a) a unique theoretical framework and methodology, (b) a set of perceptual principles, (c) a well-known set of perceptual grouping laws, (d) a theory of problem solving based on insight, and (e) a theory of memory. The following subsections discuss these contributions in turn.

Theoretical framework and methodology

The Gestalt psychologists practiced a set of theoretical and methodological principles that attempted to redefine the approach to psychological research. This is in contrast to investigations developed at the beginning of the 20th century, based on traditional scientific methodology, which divided the object of study into a set of elements that could be analyzed separately with the objective of reducing the complexity of this object.

The theoretical principles are the following:
  • Principle of Totality—Conscious experience must be considered globally (by taking into account all the physical and mental aspects of the individual simultaneously) because the nature of the mind demands that each component be considered as part of a system of dynamic relationships. Wertheimer described holism as fundamental to Gestalt psychology, writing "There are wholes, the behavior of which is not determined by that of their individual elements, but where the part-processes are themselves determined by the intrinsic nature of the whole." In other words, a perceptual whole is different from what one would predict based on only its individual parts. Moreover, the nature of a part depends upon the whole in which it is embedded. Köhler, for example, writes "In psychology...we have wholes which, instead of being the sum of parts existing independently, give their parts specific functions or properties that can only be defined in relation to the whole in question." Thus, the maxim that the whole is more than the sum of its parts is not a precise description of the Gestaltist view. Rather, "The whole is something else than the sum of its parts, because summing is a meaningless procedure, whereas the whole-part relationship is meaningful."
Based on the principles above, the following methodological principles are defined:
  • Phenomenon experimental analysis—In relation to the Totality Principle, any psychological research should take phenomena as a starting point and not be solely focused on sensory qualities.
  • Biotic experiment—The Gestalt psychologists established a need to conduct real experiments that sharply contrasted with and opposed classic laboratory experiments. This meant experimenting in natural situations, developed in real conditions, in which it would be possible to reproduce, with higher fidelity, what would be habitual for a subject.

Properties

The key principles of gestalt systems are emergence, reification, multistability and invariance.

Reification

Reification is the constructive or generative aspect of perception, by which the experienced percept contains more explicit spatial information than the sensory stimulus on which it is based.
For instance, a triangle is perceived in picture A, though no triangle is there. In pictures B and D the eye recognizes disparate shapes as "belonging" to a single shape; in C, a complete three-dimensional shape is seen where in actuality no such thing is drawn.
Reification can be explained by progress in the study of illusory contours, which are treated by the visual system as "real" contours.

Multistability

The Necker cube and the Rubin vase, two examples of multistability

Multistability (or multistable perception) is the tendency of ambiguous perceptual experiences to pop back and forth unstably between two or more alternative interpretations. This is seen, for example, in the Necker cube and Rubin's Figure/Vase illusion shown here. Other examples include the three-legged blivet and artist M. C. Escher's artwork and the appearance of flashing marquee lights moving first one direction and then suddenly the other. Again, Gestalt psychology does not explain how images appear multistable, only that they do.

Invariance

Invariance is the property of perception whereby simple geometrical objects are recognized independent of rotation, translation, and scale, as well as several other variations such as elastic deformations, different lighting, and different component features. For example, the objects in A in the figure are all immediately recognized as the same basic shape, immediately distinguishable from the forms in B. They are even recognized despite perspective and elastic deformations as in C, and when depicted using different graphic elements as in D. Computational theories of vision, such as those by David Marr, have provided alternate explanations of how perceived objects are classified.
Emergence, reification, multistability, and invariance are not necessarily separable modules to model individually, but they could be different aspects of a single unified dynamic mechanism.

Figure-Ground Organization

The perceptual field (what an organism perceives) is organized. Figure-ground organization is one form of perceptual organization. Figure-ground organization is the interpretation of perceptual elements in terms of their shapes and relative locations in the layout of surfaces in the 3-D world. Figure-ground organization structures the perceptual field into a figure (standing out at the front of the perceptual field) and a background (receding behind the figure). Pioneering work on figure-ground organization was carried out by the Danish psychologist Edgar Rubin. The Gestalt psychologists demonstrated that we tend to perceive as figures those parts of our perceptual fields that are convex, symmetric, small, and enclosed.

Prägnanz

Like figure-ground organization, perceptual grouping (sometimes called perceptual segregation) is a form of perceptual organization. Organisms perceive some parts of their perceptual fields as "hanging together" more tightly than others. They use this information for object detection. Perceptual grouping is the process that determines what these "pieces" of the perceptual field are.
The Gestaltists were the first psychologists to systematically study perceptual grouping. According to Gestalt psychologists, the fundamental principle of perceptual grouping is the law of Prägnanz. (The law of Prägnanz is also known as the law of good Gestalt.) Prägnanz is a German word that directly translates to "pithiness" and implies salience, conciseness, and orderliness. The law of Prägnanz says that we tend to experience things as regular, orderly, symmetrical, and simple. As Koffka put it, "Of several geometrically possible organizations that one will actually occur which possesses the best, simplest and most stable shape."
The law of Prägnanz implies that, as individuals perceive the world, they eliminate complexity and unfamiliarity so they can observe reality in its simplest form. Eliminating extraneous stimuli helps the mind create meaning. This meaning created by perception implies a global regularity, which is often mentally prioritized over spatial relations. The law of good Gestalt focuses on the idea of conciseness, which is what all of Gestalt theory is based on.
A major aspect of Gestalt psychology is that it implies that the mind understands external stimuli as wholes rather than as the sums of their parts. The wholes are structured and organized using grouping laws.
Gestalt psychologists attempted to discover refinements of the law of Prägnanz, and this involved writing down laws that, hypothetically, allow us to predict the interpretation of sensation, what are often called "gestalt laws". Wertheimer defined a few principles that explain the ways humans perceive objects; those principles were based on similarity, proximity, and continuity. The Gestalt concept is based on perceiving reality in its simplest form. The various laws are called laws or principles, depending on the paper where they appear—but for simplicity's sake, this article uses the term laws. These laws took several forms, such as the grouping of similar, or proximate, objects together, within this global process. These laws deal with the sensory modality of vision. However, there are analogous laws for other sensory modalities including auditory, tactile, gustatory and olfactory (Bregman – GP). The visual Gestalt principles of grouping were introduced in Wertheimer (1923). Through the 1930s and '40s Wertheimer, Köhler and Koffka formulated many of the laws of grouping through the study of visual perception.

Law of Proximity

The law of proximity states that when an individual perceives an assortment of objects, they perceive objects that are close to each other as forming a group. For example, in the figure illustrating the law of proximity, there are 72 circles, but we perceive the collection of circles in groups. Specifically, we perceive that there is a group of 36 circles on the left side of the image, and three groups of 12 circles on the right side of the image. This law is often used in advertising logos to emphasize which aspects of events are associated.
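The grouping the figure describes can be mimicked computationally. The short sketch below is only an illustration (a single-linkage clustering of points, with invented coordinates and an invented distance threshold), not a model used by the Gestalt psychologists: circles whose spacing falls under the threshold end up in the same perceptual group, so a tightly spaced 6x6 block plus three 2-column blocks separated by wider gaps come out as four groups, matching the description above.

    from itertools import product

    def proximity_groups(points, threshold):
        # Single-linkage grouping: points closer than `threshold` share a group,
        # and groups linked by a nearby point are merged (transitive "is close to").
        groups = []
        for p in points:
            near = [g for g in groups
                    if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= threshold ** 2 for q in g)]
            merged = [p]
            for g in near:
                merged.extend(g)
                groups.remove(g)
            groups.append(merged)
        return groups

    # 36 circles in a 6x6 block on the left, and 36 more on the right arranged as
    # three 2-column blocks separated by wider gaps (all coordinates are invented).
    left = [(x, y) for x, y in product(range(6), range(6))]
    right = [(8 + x + 3 * (x // 2), y) for x, y in product(range(6), range(6))]
    print(len(proximity_groups(left + right, threshold=1.5)))  # -> 4 perceived groups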

Law of Similarity

The law of similarity states that elements within an assortment of objects are perceptually grouped together if they are similar to each other. This similarity can occur in the form of shape, colour, shading or other qualities. For example, the figure illustrating the law of similarity portrays 36 circles all equal distance apart from one another forming a square. In this depiction, 18 of the circles are shaded dark, and 18 of the circles are shaded light. We perceive the dark circles as grouped together and the light circles as grouped together, forming six horizontal lines within the square of circles. This perception of lines is due to the law of similarity.

Law of Closure

Gestalt psychologists believed that humans tend to perceive objects as complete rather than focusing on the gaps that the object might contain. For example, a circle has good Gestalt in terms of completeness. However, we will also perceive an incomplete circle as a complete circle. That tendency to complete shapes and figures is called closure. The law of closure states that individuals perceive objects such as shapes, letters, pictures, etc., as being whole when they are not complete. Specifically, when parts of a whole picture are missing, our perception fills in the visual gap. Research shows that the reason the mind completes a regular figure that is not perceived through sensation is to increase the regularity of surrounding stimuli. For example, the figure that depicts the law of closure portrays what we perceive as a circle on the left side of the image and a rectangle on the right side of the image. However, gaps are present in the shapes. If the law of closure did not exist, the image would depict an assortment of different lines with different lengths, rotations, and curvatures—but with the law of closure, we perceptually combine the lines into whole shapes.

Law of Symmetry

The law of symmetry states that the mind perceives objects as being symmetrical and forming around a center point. It is perceptually pleasing to divide objects into an even number of symmetrical parts. Therefore, when two symmetrical elements are unconnected the mind perceptually connects them to form a coherent shape. Similarities between symmetrical objects increase the likelihood that objects are grouped to form a combined symmetrical object. For example, the figure depicting the law of symmetry shows a configuration of square and curled brackets. When the image is perceived, we tend to observe three pairs of symmetrical brackets rather than six individual brackets.

Law of Common Fate

The law of common fate states that elements that move together along the same path are perceived as belonging together. Experiments using the visual sensory modality found that the movement of elements of an object produces paths that individuals perceive the objects to be on. We perceive elements of objects to have trends of motion, which indicate the path that the object is on. The law of common fate implies the grouping together of objects that have the same trend of motion and are therefore on the same path. For example, if there is an array of dots and half the dots are moving upward while the other half are moving downward, we would perceive the upward-moving dots and the downward-moving dots as two distinct units.

Law of Continuity

The law of continuity (also known as the law of good continuation) states that elements of objects tend to be grouped together, and therefore integrated into perceptual wholes if they are aligned within an object. In cases where there is an intersection between objects, individuals tend to perceive the two objects as two single uninterrupted entities. Stimuli remain distinct even with overlap. We are less likely to group elements with sharp abrupt directional changes as being one object.

Law of Past Experience

The law of past experience implies that under some circumstances visual stimuli are categorized according to past experience. If two objects tend to be observed within close proximity, or small temporal intervals, the objects are more likely to be perceived together. For example, the English language contains 26 letters that are grouped to form words using a set of rules. If an individual reads an English word they have never seen, they use the law of past experience to interpret the letters "L" and "I" as two letters beside each other, rather than using the law of closure to combine the letters and interpret the object as an uppercase U.

Music

An example of the Gestalt movement in effect, as it is both a process and result, is a music sequence. People are able to recognise a sequence of perhaps six or seven notes even when it is transposed into a different tuning or key.

Problem solving and insight

Gestalt psychology contributed to the scientific study of problem solving. In fact, the early experimental work of the Gestaltists in Germany marks the beginning of the scientific study of problem solving. Later this experimental work continued through the 1960s and early 1970s with research conducted on relatively simple (but novel for participants) laboratory tasks of problem solving.
Given Gestalt psychology's focus on the whole, it was natural for Gestalt psychologists to study problem solving from the perspective of insight, seeking to understand the process by which organisms sometimes suddenly transition from having no idea how to solve a problem to instantly understanding the whole problem and its solution. In a famous set of experiments, Köhler gave chimpanzees some boxes and placed food high off the ground; after some time, the chimpanzees appeared to suddenly realize that they could stack the boxes on top of each other to reach the food.
Max Wertheimer distinguished two kinds of thinking: productive thinking and reproductive thinking. Productive thinking is solving a problem based on insight—a quick, creative, unplanned response to situations and environmental interaction. Reproductive thinking is solving a problem deliberately based on previous experience and knowledge. Reproductive thinking proceeds algorithmically—a problem solver reproduces a series of steps from memory, knowing that they will lead to a solution—or by trial and error.
Karl Duncker, another Gestalt psychologist who studied problem solving, coined the term functional fixedness for describing the difficulties in both visual perception and problem solving that arise from the fact that one element of a whole situation already has a (fixed) function that has to be changed in order to perceive something or find the solution to a problem.
Abraham Luchins also studied problem solving from the perspective of Gestalt psychology. He is well known for his research on the role of mental set (Einstellung effect), which he demonstrated using a series of problems having to do with refilling water jars.
Another Gestalt psychologist, Perkins, believes insight deals with three processes:
  1. An unconscious leap in thinking.
  2. A greatly increased speed of mental processing.
  3. The short-circuiting of normal reasoning.
Views opposing Gestalt psychology include:
  1. Nothing-special view
  2. Neo-gestalt view
  3. The Three-Process View

Fuzzy-trace theory of memory

Fuzzy-trace theory, a dual process model of memory and reasoning, was also derived from Gestalt psychology. Fuzzy-trace theory posits that we encode information into two separate traces: verbatim and gist. Information stored in verbatim is exact memory for detail (the individual parts of a pattern, for example) while information stored in gist is semantic and conceptual (what we perceive the pattern to be). The effects seen in Gestalt psychology can be attributed to the way we encode information as gist.

Legacy

Gestalt psychology struggled to precisely define terms like Prägnanz, to make specific behavioral predictions, and to articulate testable models of underlying neural mechanisms. It was criticized as being merely descriptive. These shortcomings led, by the mid-20th century, to growing dissatisfaction with Gestaltism and a subsequent decline in its impact on psychology. Despite this decline, Gestalt psychology has formed the basis of much further research into the perception of patterns and objects and of research into behavior, thinking, problem solving and psychopathology.

Support from cybernetics and neurology

In the 1940s and 1950s, laboratory research in neurology and what became known as cybernetics on the mechanism of frogs' eyes indicated that perception of 'gestalts' (in particular gestalts in motion) is perhaps more primitive and fundamental than 'seeing' as such:
A frog hunts on land by vision... He has no fovea, or region of greatest acuity in vision, upon which he must center a part of the image... The frog does not seem to see or, at any rate, is not concerned with the detail of stationary parts of the world around him. He will starve to death surrounded by food if it is not moving. His choice of food is determined only by size and movement. He will leap to capture any object the size of an insect or worm, providing it moves like one. He can be fooled easily not only by a piece of dangled meat but by any moving small object... He does remember a moving thing provided it stays within his field of vision and he is not distracted.
The lowest-level concepts related to visual perception for a human being probably differ little from the concepts of a frog. In any case, the structure of the retina in mammals and in human beings is the same as in amphibians. The phenomenon of distortion of perception of an image stabilized on the retina gives some idea of the concepts of the subsequent levels of the hierarchy. This is a very interesting phenomenon. When a person looks at an immobile object, "fixes" it with his eyes, the eyeballs do not remain absolutely immobile; they make small involuntary movements. As a result the image of the object on the retina is constantly in motion, slowly drifting and jumping back to the point of maximum sensitivity. The image "marks time" in the vicinity of this point.

Quantum cognition modeling

Similarities between Gestalt phenomena and quantum mechanics have been pointed out by, among others, chemist Anton Amann, who commented that "similarities between Gestalt perception and quantum mechanics are on a level of a parable" yet may give useful insight nonetheless. Physicist Elio Conte and co-workers have proposed abstract, mathematical models to describe the time dynamics of cognitive associations with mathematical tools borrowed from quantum mechanics and have discussed psychology experiments in this context. A similar approach has been suggested by physicists David Bohm, Basil Hiley and philosopher Paavo Pylkkänen with the notion that mind and matter both emerge from an "implicate order". The models involve non-commutative mathematics; such models account for situations in which the outcome of two measurements performed one after the other can depend on the order in which they are performed—a pertinent feature for psychological processes, as an experiment performed on a conscious person may influence the outcome of a subsequent experiment by changing the state of mind of that person.
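A minimal numerical illustration of the order effect (not any specific model from the authors above, just two invented non-commuting operators): applying the same two "measurement" operators to a state in different orders gives different results, which is the property these models exploit.

    import numpy as np

    # Two simple operators that do not commute (illustrative values only).
    A = np.array([[1.0, 0.0],
                  [0.0, 0.0]])            # "measure" along the first axis
    B = 0.5 * np.array([[1.0, 1.0],
                        [1.0, 1.0]])      # "measure" along the diagonal axis

    state = np.array([0.6, 0.8])
    print(B @ (A @ state))   # A first, then B -> [0.3 0.3]
    print(A @ (B @ state))   # B first, then A -> [0.7 0. ]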

Use in contemporary social psychology

The halo effect can be explained through the application of Gestalt theories to social information processing. The constructive theories of social cognition are applied through the expectations of individuals: once a person has been perceived in a positive manner, the person judging them continues to view them in that positive manner. Gestalt theories of perception reinforce the individual's tendency to perceive actions and characteristics as a whole rather than as isolated parts; humans are therefore inclined to build a coherent and consistent impression of objects and behaviors in order to achieve an acceptable shape and form. The halo effect is a cognitive bias that occurs during impression formation and is what forms these patterns for individuals. The halo effect can also be influenced by physical characteristics, social status and many other characteristics. It can have real repercussions on the individual's perception of reality, either negatively or positively, meaning it can construct negative or positive images about other individuals or situations, something that could lead to self-fulfilling prophecies, stereotyping, or even discrimination.

Contemporary cognitive and perceptual psychology

Some of the central criticisms of Gestaltism are based on the preference Gestaltists are deemed to have for theory over data, and a lack of quantitative research supporting Gestalt ideas. This is not necessarily a fair criticism as highlighted by a recent collection of quantitative research on Gestalt perception. Researchers continue to test hypotheses about the mechanisms underlying Gestalt principles such as the principle of similarity.
Other important criticisms concern the lack of definition and support for the many physiological assumptions made by gestaltists and lack of theoretical coherence in modern Gestalt psychology.
In some scholarly communities, such as cognitive psychology and computational neuroscience, gestalt theories of perception are criticized for being descriptive rather than explanatory in nature. For this reason, they are viewed by some as redundant or uninformative. For example, a textbook on visual perception states that, "The physiological theory of the gestaltists has fallen by the wayside, leaving us with a set of descriptive principles, but without a model of perceptual processing. Indeed, some of their 'laws' of perceptual organisation today sound vague and inadequate. What is meant by a 'good' or 'simple' shape, for example?"
One historian of psychology has argued that Gestalt psychologists first discovered many principles later championed by cognitive psychology, including schemas and prototypes. Another psychologist has argued that the Gestalt psychologists made a lasting contribution by showing how the study of illusions can help scientists understand essential aspects of how the visual system normally functions, not merely how it breaks down.

Use in design

The gestalt laws are used in user interface design. The laws of similarity and proximity can, for example, be used as guides for placing radio buttons. They may also be used in designing computers and software for more intuitive human use. Examples include the design and layout of a desktop's shortcuts in rows and columns.
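As a small sketch of how this might look in practice (using Python's tkinter, with invented labels and values), the two option groups below are separated by extra whitespace and given the same framed treatment, so proximity and similarity make the grouping obvious before the user reads the captions:

    import tkinter as tk

    root = tk.Tk()
    size = tk.StringVar(value="medium")
    colour = tk.StringVar(value="red")

    # Group 1: tight internal spacing, generous outer margin (law of proximity).
    size_frame = tk.LabelFrame(root, text="Size", padx=10, pady=10)
    size_frame.pack(padx=10, pady=(10, 20))
    for option in ("small", "medium", "large"):
        tk.Radiobutton(size_frame, text=option, variable=size, value=option).pack(anchor="w")

    # Group 2: same visual treatment as group 1 (law of similarity), clearly separated from it.
    colour_frame = tk.LabelFrame(root, text="Colour", padx=10, pady=10)
    colour_frame.pack(padx=10, pady=(0, 10))
    for option in ("red", "green", "blue"):
        tk.Radiobutton(colour_frame, text=option, variable=colour, value=option).pack(anchor="w")

    root.mainloop()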

Thursday, July 16, 2020

Drug interaction

From Wikipedia, the free encyclopedia
 
A drug interaction is a change in the action or side effects of a drug caused by concomitant administration with a food, beverage, supplement, or another drug.

There are many causes of drug interactions. For example, one drug may alter the pharmacokinetics of another. Alternatively, drug interactions may result from competition for a single receptor or signaling pathway.

The risk of a drug-drug interaction increases with the number of drugs used. Over a third (36%) of the elderly in the U.S. regularly use five or more medications or supplements, and 15% are at risk of a significant drug-drug interaction.

Pharmacodynamic interactions

When two drugs are used together, their effects can be additive (the result is what you expect when you add together the effect of each drug taken independently), synergistic (combining the drugs leads to a larger effect than expected), or antagonistic (combining the drugs leads to a smaller effect than expected). There is sometimes confusion on whether drugs are synergistic or additive, since the individual effects of each drug may vary from patient to patient. A synergistic interaction may be beneficial for patients, but may also increase the risk of overdose. 
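As a toy sketch of the classification just described (invented numbers, and using the simple "sum of the individual effects" baseline given above; real pharmacology uses more careful reference models such as Bliss independence or Loewe additivity):

    def classify_interaction(effect_a, effect_b, effect_combined, tolerance=0.5):
        # Compare the measured combined effect with the naive additive expectation.
        expected = effect_a + effect_b
        if abs(effect_combined - expected) <= tolerance:
            return "additive"
        return "synergistic" if effect_combined > expected else "antagonistic"

    # Hypothetical effect sizes, e.g. percentage-point reductions in blood pressure.
    print(classify_interaction(10, 8, 18))   # additive
    print(classify_interaction(10, 8, 30))   # synergistic
    print(classify_interaction(10, 8, 12))   # antagonistic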

Both synergy and antagonism can occur during different phases of the interaction between a drug and an organism. For example, when synergy occurs at a cellular receptor level this is termed agonism, and the substances involved are termed agonists. On the other hand, in the case of antagonism, the substances involved are known as inverse agonists. The different responses of a receptor to the action of a drug have resulted in a number of classifications, such as "partial agonist", "competitive agonist" etc. These concepts have fundamental applications in the pharmacodynamics of these interactions. The proliferation of existing classifications at this level, along with the fact that the exact reaction mechanisms for many drugs are not well understood, means that it is almost impossible to offer a clear classification for these concepts. It is even possible that many authors would misapply any given classification.

Direct interactions between drugs are also possible and may occur when two drugs are mixed prior to intravenous injection. For example, mixing thiopentone and suxamethonium in the same syringe can lead to the precipitation of thiopentone.

The change in an organism's response upon administration of a drug is an important factor in pharmacodynamic interactions. These changes are extraordinarily difficult to classify given the wide variety of modes of action that exist and the fact that many drugs can cause their effect through a number of different mechanisms. This wide diversity also means that, in all but the most obvious cases, it is important to investigate and understand these mechanisms. The well-founded suspicion exists that there are more unknown interactions than known ones.

Effects of the competitive inhibition of an agonist by increases in the concentration of an antagonist: a drug's potency can be affected (the response curve shifted to the right) by the presence of an antagonistic interaction. The pA2 value, known as the Schild representation, provides a mathematical model of the agonist-antagonist relationship.
 
Pharmacodynamic interactions can occur on:
  1. Pharmacological receptors: Receptor interactions are the most easily defined, but they are also the most common. From a pharmacodynamic perspective, two drugs can be considered to be:
    1. Homodynamic, if they act on the same receptor. They, in turn can be:
      1. Pure agonists, if they bind to the main locus of the receptor, causing a similar effect to that of the main drug.
      2. Partial agonists if, on binding to one of the receptor's secondary sites, they have the same effect as the main drug, but with a lower intensity.
      3. Antagonists, if they bind directly to the receptor's main locus but their effect is opposite to that of the main drug. These include:
        1. Competitive antagonists, if they compete with the main drug to bind with the receptor. The amount of antagonist or main drug that binds with the receptor will depend on the concentrations of each one in the plasma.
        2. Uncompetitive antagonists, when the antagonist binds to the receptor irreversibly and is not released until the receptor is saturated. In principle the quantity of antagonist and agonist that binds to the receptor will depend on their concentrations. However, the presence of the antagonist will cause the main drug to be released from the receptor regardless of the main drug's concentration, therefore all the receptors will eventually become occupied by the antagonist.
    2. Heterodynamic competitors, if they act on distinct receptors.
  2. Signal transduction mechanisms: these are molecular processes that commence after the interaction of the drug with the receptor. For example, it is known that hypoglycaemia (low blood glucose) in an organism produces a release of catecholamines, which trigger compensation mechanisms thereby increasing blood glucose levels. The release of catecholamines also triggers a series of symptoms, which allows the organism to recognise what is happening and which act as a stimulant for preventative action (eating sugars). Should a patient be taking a drug such as insulin, which reduces glycaemia, and also be taking another drug such as certain beta-blockers for heart disease, then the beta-blockers will act to block the adrenaline receptors. This will block the reaction triggered by the catecholamines should a hypoglycaemic episode occur. Therefore, the body will not adopt corrective mechanisms and there will be an increased risk of a serious reaction resulting from the ingestion of both drugs at the same time.
  3. Antagonistic physiological systems: Imagine a drug A that acts on a certain organ. This effect will increase with increasing concentrations of physiological substance S in the organism. Now imagine a drug B that acts on another organ, which increases the amount of substance S. If both drugs are taken simultaneously it is possible that drug A could cause an adverse reaction in the organism as its effect will be indirectly increased by the action of drug B. An actual example of this interaction is found in the concomitant use of digoxin and furosemide. The former acts on cardiac fibres and its effect is increased if there are low levels of potassium (K) in blood plasma. Furosemide is a diuretic that lowers arterial blood pressure but favours the loss of K+. This could lead to hypokalemia (low levels of potassium in the blood), which could increase the toxicity of digoxin.

Pharmacokinetic interactions

Modifications in the effect of a drug are caused by differences in the absorption, transport, distribution, metabolism or excretion of one or both of the drugs compared with the expected behavior of each drug when taken individually. These changes are basically modifications in the concentration of the drugs. In this respect, two drugs can be homergic if they have the same effect in the organism and heterergic if their effects are different.

Absorption interactions

Changes in motility

Some drugs, such as the prokinetic agents increase the speed with which a substance passes through the intestines. If a drug is present in the digestive tract's absorption zone for less time its blood concentration will decrease. The opposite will occur with drugs that decrease intestinal motility.
  • pH: Drugs can be present in either ionised or non-ionised form, depending on their pKa (the pH at which the drug reaches equilibrium between its ionised and non-ionised forms). The non-ionised forms of drugs are usually easier to absorb because they are not repelled by the lipid bilayer of the cell membrane; most of them can be absorbed by passive diffusion, unless they are too big or too polarized (like glucose or vancomycin), in which case they may or may not have specific and non-specific transporters, distributed over the entire internal surface of the intestine, that carry them into the body. Increasing the absorption of a drug will increase its bioavailability, so shifting a drug between its ionised and non-ionised states can increase or decrease its absorption, depending on the drug.
Certain drugs require an acid stomach pH for absorption. Others require the basic pH of the intestines. Any modification in the pH could change this absorption. In the case of the antacids, an increase in pH can inhibit the absorption of other drugs such as zalcitabine (absorption can be decreased by 25%), tipranavir (25%) and amprenavir (up to 35%). Less often, an increase in pH causes an increase in absorption, as occurs when cimetidine is taken with didanosine. In this case a gap of two to four hours between taking the two drugs is usually sufficient to avoid the interaction.
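The ionisation argument can be made quantitative with the Henderson-Hasselbalch relationship. The sketch below (illustrative only, using an aspirin-like weak acid with an assumed pKa of 3.5) shows how raising gastric pH, for example with an antacid, shifts such a drug toward its ionised, poorly absorbed form:

    def fraction_unionised(pH, pKa, weak_acid=True):
        # Henderson-Hasselbalch: for a weak acid, ionised/un-ionised = 10**(pH - pKa);
        # for a weak base, ionised/un-ionised = 10**(pKa - pH).
        ratio = 10 ** (pH - pKa) if weak_acid else 10 ** (pKa - pH)
        return 1.0 / (1.0 + ratio)

    # Stomach, antacid-raised stomach, and small intestine (approximate pH values).
    for pH in (2.0, 5.0, 6.5):
        print(pH, round(fraction_unionised(pH, pKa=3.5), 3))
    # pH 2.0 -> ~0.97 un-ionised; pH 5.0 -> ~0.03; pH 6.5 -> ~0.001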

Transport and distribution interactions

The main interaction mechanism is competition for plasma protein transport. In these cases the drug that arrives first binds with the plasma protein, leaving the other drug dissolved in the plasma, which modifies its concentration. The organism has mechanisms to counteract these situations (by, for example, increasing plasma clearance), which means that they are not usually clinically relevant. However, these situations should be taken into account if other associated problems are present such as when the method of excretion is affected.

Metabolism interactions

Diagram of cytochrome P450 isoenzyme 2C9 with the haem group in the centre of the enzyme.
 
Many drug interactions are due to alterations in drug metabolism. Further, human drug-metabolizing enzymes are typically activated through the engagement of nuclear receptors. One notable system involved in metabolic drug interactions is the enzyme system comprising the cytochrome P450 oxidases.

CYP450

Cytochrome P450 is a very large family of haemoproteins (hemoproteins) that are characterized by their enzymatic activity and their role in the metabolism of a large number of drugs. Of the various families present in human beings, the most interesting in this respect are families 1, 2 and 3, and the most important enzymes are CYP1A2, CYP2C9, CYP2C19, CYP2D6, CYP2E1 and CYP3A4. The majority of the enzymes are also involved in the metabolism of endogenous substances, such as steroids or sex hormones, which is also important should there be interference with these substances. As a result of these interactions the function of the enzymes can either be stimulated (enzyme induction) or inhibited.

Enzymatic inhibition

If drug A is metabolized by a cytochrome P450 enzyme and drug B inhibits or decreases the enzyme's activity, then drug A will remain at high levels in the plasma for longer, as its inactivation is slower. As a result, enzymatic inhibition will cause an increase in the drug's effect. This can cause a wide range of adverse reactions.

This can occasionally lead to a paradoxical situation in which enzymatic inhibition causes a decrease in the drug's effect: if the metabolism of drug A gives rise to a product A2 that actually produces the effect of the drug, then inhibiting the metabolism of drug A with drug B will decrease the concentration of A2 in the blood, and with it the final effect of the drug.

Enzymatic induction

If drug A is metabolized by a cytochrome P450 enzyme and drug B induces or increases the enzyme's activity, then blood plasma concentrations of drug A will quickly fall as its inactivation will take place more rapidly. As a result, enzymatic induction will cause a decrease in the drug's effect.

As in the previous case, it is possible to find paradoxical situations where an active metabolite causes the drug's effect. In this case the increase in active metabolite A2 (following the previous example) produces an increase in the drug's effect.
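A minimal pharmacokinetic sketch of why inhibition and induction matter (invented numbers, assuming a simple model in which the average steady-state concentration equals the dosing rate divided by clearance):

    def steady_state_concentration(dose_rate_mg_per_h, clearance_l_per_h):
        # Average steady-state plasma concentration = rate in / rate of removal.
        return dose_rate_mg_per_h / clearance_l_per_h

    baseline  = steady_state_concentration(10, 5.0)        # 2.0 mg/L with normal clearance
    inhibited = steady_state_concentration(10, 5.0 * 0.5)  # inhibitor halves clearance -> 4.0 mg/L
    induced   = steady_state_concentration(10, 5.0 * 2.0)  # inducer doubles clearance  -> 1.0 mg/L
    print(baseline, inhibited, induced)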

It can often occur that a patient is taking two drugs that are both enzymatic inducers, both inhibitors, or one an inducer and the other an inhibitor, which greatly complicates the control of an individual's medication and the avoidance of possible adverse reactions.

An example of this is shown in the following table for the CYP1A2 enzyme, which is the most common enzyme found in the human liver. The table shows the substrates (drugs metabolized by this enzyme) and the inducers and inhibitors of its activity:

Enzyme CYP3A4 is the enzyme that the greatest number of drugs use as a substrate. Over 100 drugs depend on its metabolism for their activity and many others act on the enzyme as inducers or inhibitors.

Some foods also act as inducers or inhibitors of enzymatic activity. The following table shows the most common:

Foods and their influence on drug metabolism
Food | Mechanism | Drugs affected
| Enzymatic inducer | Acenocoumarol, warfarin
Grapefruit juice | Enzymatic inhibition |
Soya | Enzymatic inhibition | Clozapine, haloperidol, olanzapine, caffeine, NSAIDs, phenytoin, zafirlukast, warfarin
Garlic | Increases antiplatelet activity |
Ginseng | To be determined | Warfarin, heparin, aspirin and NSAIDs
Ginkgo biloba | Strong inhibitor of platelet aggregation factor | Warfarin, aspirin and NSAIDs
Hypericum perforatum (St John's wort) | Enzymatic inducer (CYP450) | Warfarin, digoxin, theophylline, cyclosporine, phenytoin and antiretrovirals
Ephedra | Receptor-level agonist | MAOIs, central nervous system stimulants, ergotamine alkaloids and xanthines
Kava (Piper methysticum) | Unknown | Levodopa
Ginger | Inhibits thromboxane synthetase (in vitro) | Anticoagulants
Chamomile | Unknown | Benzodiazepines, barbiturates and opioids
Hawthorn | Unknown | Beta-adrenergic antagonists, cisapride, digoxin, quinidine
Grapefruit juice can act as an enzyme inhibitor.
 
Any study of pharmacological interactions between particular medicines should also discuss the likely interactions of some medicinal plants. The effects caused by medicinal plants should be considered in the same way as those of medicines as their interaction with the organism gives rise to a pharmacological response. Other drugs can modify this response and also the plants can give rise to changes in the effects of other active ingredients.

There is little data available regarding interactions involving medicinal plants for the following reasons:

  1. False sense of security regarding medicinal plants. The interaction between a medicinal plant and a drug is usually overlooked due to a belief in the "safety of medicinal plants."
  2. Variability of composition, both qualitative and quantitative. The composition of a plant-based drug is often subject to wide variations due to a number of factors such as seasonal differences in concentrations, soil type, climatic changes or the existence of different varieties or chemical races within the same plant species that have variable compositions of the active ingredient. On occasion, an interaction can be due to just one active ingredient, but this can be absent in some chemical varieties or it can be present in low concentrations, which will not cause an interaction. Counter-interactions can even occur. This happens, for instance, with ginseng: the Panax ginseng variety increases the prothrombin time, while the Panax quinquefolius variety decreases it.
  3. Absence of use in at-risk groups, such as hospitalized and polypharmacy patients, who tend to have the majority of drug interactions.
  4. Limited consumption of medicinal plants has given rise to a lack of interest in this area.
They are usually included in the category of foods as they are usually taken as a tea or food supplement. However, medicinal plants are increasingly being taken in a manner more often associated with conventional medicines: pills, tablets, capsules, etc.

Excretion interactions

Renal excretion

Human kidney nephron.

Only the free fraction of a drug that is dissolved in the blood plasma can be removed through the kidney. Therefore, drugs that are tightly bound to proteins are not available for renal excretion unless they are metabolized, in which case they may be eliminated as metabolites. Creatinine clearance is used as a measure of kidney functioning, but it is only useful in cases where the drug is excreted in an unaltered form in the urine. The excretion of drugs from the kidney's nephrons has the same properties as that of any other organic solute: passive filtration, reabsorption and active secretion. In the latter phase the secretion of drugs is an active process that is subject to conditions relating to the saturability of the transported molecule and competition between substrates. Therefore, these are key sites where interactions between drugs could occur. Excretion also depends on a number of factors including the pH of the urine: it has been shown that drugs that act as weak bases are increasingly excreted as the pH of the urine becomes more acidic, and the inverse is true for weak acids. This mechanism is of great use when treating intoxications (by making the urine more acidic or more alkaline) and it is also used by some drugs and herbal products to produce their interactive effect.
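The urinary pH effect follows from the same ionisation chemistry used in the absorption section above. The sketch below (an invented weak base with pKa 9.0, and typical plasma and acidified-urine pH values) estimates how strongly acidic urine "traps" such a drug, under the simplifying assumption that only the un-ionised form equilibrates across the tubule wall:

    def ionised_to_unionised_ratio_weak_base(pH, pKa):
        # For a weak base, [BH+]/[B] = 10**(pKa - pH).
        return 10 ** (pKa - pH)

    pKa = 9.0
    plasma = 1 + ionised_to_unionised_ratio_weak_base(7.4, pKa)  # total drug per unit un-ionised drug in plasma
    urine  = 1 + ionised_to_unionised_ratio_weak_base(5.5, pKa)  # the same quantity in acidified urine
    print(round(urine / plasma, 1))  # ~77.5: far more total drug accumulates on the acidic urine side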

Bile excretion

Bile excretion is different from kidney excretion as it always involves energy expenditure in active transport across the epithelium of the bile duct against a concentration gradient. This transport system can also be saturated if the plasma concentrations of the drug are high. Bile excretion of drugs mainly takes place where their molecular weight is greater than 300 and they contain both polar and lipophilic groups. The glucuronidation of the drug in the kidney also facilitates bile excretion. Substances with similar physicochemical properties can block the receptor, which is important in assessing interactions. A drug excreted in the bile duct can occasionally be reabsorbed by the intestines (in the enterohepatic circuit), which can also lead to interactions with other drugs.

Herb-drug interactions

Herb-drug interactions are drug interactions that occur between herbal medicines and conventional drugs. These types of interactions may be more common than drug-drug interactions because herbal medicines often contain multiple pharmacologically active ingredients, while conventional drugs typically contain only one. Some such interactions are clinically significant, although most herbal remedies are not associated with drug interactions causing serious consequences. Most herb-drug interactions are moderate in severity. The most commonly implicated conventional drugs in herb-drug interactions are warfarin, insulin, aspirin, digoxin, and ticlopidine, due to their narrow therapeutic indices. The most commonly implicated herbal products are those containing St. John's wort, magnesium, calcium, iron, or ginkgo.

Examples

Examples of herb-drug interactions include, but are not limited to:

Mechanisms

The mechanisms underlying most herb-drug interactions are not fully understood. Interactions between herbal medicines and anticancer drugs typically involve drug-metabolizing enzymes of the cytochrome P450 family. For example, St. John's Wort has been shown to induce CYP3A4 and P-glycoprotein in vitro and in vivo.

Underlying factors

It is possible to take advantage of positive drug interactions. However, the negative interactions are usually of more interest because of their pathological significance, and also because they are often unexpected, and may even go undiagnosed. By studying the conditions that favor the appearance of interactions, it should be possible to prevent them, or at least diagnose them in time. The factors or conditions that predispose the appearance of interactions include:
  • Old age: factors relating to how human physiology changes with age may affect the interaction of drugs. For example, liver metabolism, kidney function, nerve transmission or the functioning of bone marrow all decrease with age. In addition, in old age there is a sensory decrease that increases the chances of errors being made in the administration of drugs.
  • Polypharmacy: The use of multiple drugs by a single patient, to treat one or more ailments. The more drugs a patient takes the more likely it will be that some of them will interact.
  • Genetic factors: Genes encode the enzymes that metabolize drugs. Some populations carry genotypic variations that decrease or increase the activity of these enzymes. The consequence is, on occasion, a greater predisposition towards drug interactions and therefore a greater predisposition for adverse effects to occur. This is seen in genotype variations in the isozymes of cytochrome P450.
  • Hepatic or renal diseases: The blood concentrations of drugs that are metabolized in the liver and/or eliminated by the kidneys may be altered if either of these organs is not functioning correctly. If this is the case an increase in blood concentration is normally seen.
  • Serious diseases that could worsen if the dose of the medicine is reduced.
  • Drug dependent factors:
    • Narrow therapeutic index: Where the difference between the effective dose and the toxic dose is small. The drug digoxin is an example of this type of drug.
    • Steep dose-response curve: Small changes in the dosage of a drug produce large changes in the drug's concentration in the patient's blood plasma.
    • Saturable hepatic metabolism: the liver's capacity to metabolize the drug can be exceeded, so that small increases in dose produce disproportionately large increases in the drug's plasma concentration (see the sketch after this list).
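
As a rough illustration of why saturable hepatic metabolism matters, the sketch below applies the standard Michaelis–Menten steady-state relation Css = Km·R/(Vmax − R), where R is the dosing rate. It is a hypothetical example: the function name and the parameter values are illustrative only and do not describe any real drug.

# Sketch of saturable (Michaelis-Menten) elimination at steady state.
# Css = Km * R / (Vmax - R), where R is the dosing rate.
# Vmax (mg/day) and Km (mg/L) are made-up illustrative values.
def steady_state_concentration(dose_rate, vmax=500.0, km=4.0):
    """Steady-state plasma concentration (mg/L) for a daily dose rate (mg/day)."""
    if dose_rate >= vmax:
        raise ValueError("Dose rate exceeds metabolic capacity; no steady state exists.")
    return km * dose_rate / (vmax - dose_rate)

for dose in (300, 400, 450, 475):
    print(f"{dose} mg/day -> {steady_state_concentration(dose):.1f} mg/L")
# 300 -> 6.0, 400 -> 16.0, 450 -> 36.0, 475 -> 76.0 mg/L:
# small dose increases produce disproportionately large concentration increases.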

Epidemiology

Among US adults older than 55, 4% are taking medications and/or supplements that put them at risk of a major drug interaction. Potential drug-drug interactions have increased over time and are more common among elderly people with less education, even after controlling for age, sex, place of residence, and comorbidity.

Universal Networking Language

From Wikipedia, the free encyclopedia
 
Universal Networking Language (UNL) is a declarative formal language specifically designed to represent semantic data extracted from natural language texts. It can be used as a pivot language in interlingual machine translation systems or as a knowledge representation language in information retrieval applications.

Scope and goals

UNL is designed to establish a simple foundation for representing the most central aspects of information and meaning in a machine- and human-language-independent form. As a language-independent formalism, UNL aims to code, store, disseminate and retrieve information independently of the original language in which it was expressed. In this sense, UNL seeks to provide tools for overcoming the language barrier in a systematic way.

At first glance, UNL seems to be a kind of interlingua, into which source texts are converted before being translated into target languages. It can, in fact, be used for this purpose, and very efficiently, too. However, its real strength is knowledge representation and its primary objective is to provide an infrastructure for handling knowledge that already exists or can exist in any given language.

Nevertheless, it is important to note that at present it would be foolish to claim to represent the “full” meaning of any word, sentence, or text for any language. Subtleties of intention and interpretation make the “full meaning,” however we might conceive it, too variable and subjective for any systematic treatment. Thus UNL avoids the pitfalls of trying to represent the “full meaning” of sentences or texts, targeting instead the “core” or “consensual” meaning most often attributed to them. In this sense, much of the subtlety of poetry, metaphor, figurative language, innuendo, and other complex, indirect communicative behaviors is beyond the current scope and goals of UNL. Instead, UNL targets direct communicative behavior and literal meaning as a tangible, concrete basis for most human communication in practical, day-to-day settings.

Structure

In the UNL approach, information conveyed by natural language is represented sentence by sentence as a hypergraph composed of a set of directed binary labeled links (referred to as relations) between nodes or hypernodes (the Universal Words, or simply UWs), which stand for concepts. UWs can also be annotated with attributes representing context information. 

As an example, the English sentence ‘The sky was blue?!’ can be represented in UNL as follows:

[Figure: UNL graph of the sentence "The sky was blue?!"]

In the example above, "sky(icl>natural world)" and "blue(icl>color)", which represent individual concepts, are UWs; "aoj" (= attribute of an object) is a directed binary semantic relation linking the two UWs; and "@def", "@interrogative", "@past", "@exclamation" and "@entry" are attributes modifying UWs.
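
In the linear (textual) UNL notation, where a relation is written as relation(source UW, target UW) and each attribute is appended to the UW it modifies, the graph for this sentence would read roughly as follows; this is a reconstruction from the description above, not a copy of the original figure:

aoj(blue(icl>color).@entry.@past.@exclamation.@interrogative, sky(icl>natural world).@def)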

UWs are intended to represent universal concepts, but are expressed in English words or in any other natural language in order to be humanly readable. They consist of a "headword" (the UW root) and a "constraint list" (the UW suffix between parentheses), where the constraints are used to disambiguate the general concept conveyed by the headword. The set of UWs is organized in the UNL Ontology, in which high-level concepts are related to lower-level ones through the relations "icl" (= is a kind of), "iof" (= is an instance of) and "equ" (= is equal to).
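
As an illustration of this headword-plus-constraints structure, the short Python sketch below (a hypothetical helper, not part of any official UNL toolkit) splits a UW string into its headword and constraint list:

# Hypothetical helper: split a Universal Word such as "sky(icl>natural world)"
# into its headword and its constraint list, as described above.
def parse_uw(uw):
    if "(" not in uw:
        return uw, {}
    head, _, rest = uw.partition("(")
    constraints = {}
    for item in rest.rstrip(")").split(","):
        relation, _, value = item.partition(">")
        if value:
            constraints[relation.strip()] = value.strip()
    return head.strip(), constraints

print(parse_uw("sky(icl>natural world)"))  # ('sky', {'icl': 'natural world'})
print(parse_uw("blue(icl>color)"))         # ('blue', {'icl': 'color'})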

Relations are intended to represent semantic links between words in every existing language. They can be ontological (such as "icl" and "iof," referred to above), logical (such as "and" and "or"), and thematic (such as "agt" = agent, "ins" = instrument, "tim" = time, "plc" = place, etc.). There are currently 46 relations in the UNL Specs. They jointly define the UNL syntax.

Attributes represent information that cannot be conveyed by UWs and relations. Normally, they represent information concerning time ("@past", "@future", etc.), reference ("@def", "@indef", etc.), modality ("@can", "@must", etc.), focus ("@topic", "@focus", etc.), and so on.

Within the UNL Program, the process of representing natural language sentences in UNL graphs is called UNLization, and the process of generating natural language sentences out of UNL graphs is called NLization. UNLization, which involves natural language analysis and understanding, is intended to be carried out semi-automatically (i.e., by humans with computer aids); and NLization is intended to be carried out fully automatically.

History

The UNL Programme started in 1996, as an initiative of the Institute of Advanced Studies of the United Nations University in Tokyo, Japan. In January 2001, the United Nations University set up an autonomous organization, the UNDL Foundation, to be responsible for the development and management of the UNL Programme. The foundation, a non-profit international organisation, has an independent identity from the United Nations University, although it has special links with the UN. It inherited from the UNU/IAS the mandate of implementing the UNL Programme so that it can fulfil its mission.

The programme has already crossed important milestones. The overall architecture of the UNL System has been developed with a set of basic software and tools necessary for its functioning. These are being tested and improved. A vast amount of linguistic resources from the various native languages already under development, as well as from the UNL expression, has been accumulated in the last few years. Moreover, the technical infrastructure for expanding these resources is already in place, thus facilitating the participation of many more languages in the UNL system from now on. A growing number of scientific papers and academic dissertations on the UNL are being published every year.

The most visible accomplishment so far is the recognition by the Patent Co-operation Treaty (PCT) of the innovative character and industrial applicability of the UNL, which was obtained in May 2002 through the World Intellectual Property Organisation (WIPO). Acquiring the patents (US patents 6,704,700 and 7,107,206) for the UNL is a completely novel achievement within the United Nations.

Heritage language

From Wikipedia, the free encyclopedia
 
A heritage language is a minority language (either immigrant or indigenous) learned by its speakers at home as children, but never fully developed because of insufficient input from the social environment: the community of speakers grows up with a dominant language in which they become more competent. Polinsky & Kagan describe it as a continuum (drawing on Valdés's definition of heritage language) that ranges from fluent speakers to individuals who barely speak the home language. In countries or cultures where one's mother tongue is determined by ethnic group, a heritage language would be linked to the native language.

The term can also refer to the language of a person's family or community that the person does not speak or understand, but identifies with culturally.

Definitions and use

Heritage language is a language which is predominantly spoken by "nonsocietal" groups and linguistic minorities.

In various fields, such as foreign language education and linguistics, the definitions of heritage language become more specific and divergent. In foreign language education, heritage language is defined in terms of a student's upbringing and functional proficiency in the language: a student raised in a home where a non-majority language is spoken is a heritage speaker of that language if they possess some proficiency in it. Under this definition, individuals who have some cultural connection with the language but do not speak it are not considered heritage students. This restricted definition became popular in the mid-1990s with the publication of Standards for Foreign Language Learning by the American Council on the Teaching of Foreign Languages.

Among linguists, heritage language is an end-state language that is defined based on the temporal order of acquisition and, often, the language dominance in the individual. A heritage speaker acquires the heritage language as their first language through natural input in the home environment and acquires the majority language as a second language, usually when they start school and talk about different topics with people in school, or by exposure through media (written texts, internet, popular culture etc.). As exposure to the heritage language decreases and exposure to the majority language increases, the majority language becomes the individual’s dominant language and acquisition of the heritage language changes. The results of these changes can be seen in divergence of the heritage language from monolingual norms in the areas of phonology, lexical knowledge (knowledge of vocabulary or words), morphology, syntax, semantics and code-switching, although mastery of the heritage language may vary from purely receptive skills in only informal spoken language to native-like fluency.

Controversy in definition

As stated by Polinsky and Kagan: "The definition of a heritage speaker in general and for specific languages continues to be debated. The debate is of particular significance in such languages as Chinese, Arabic, and languages of India and the Philippines, where speakers of multiple languages or dialects are seen as heritage speakers of a single standard language taught for geographic, cultural or other reasons (Mandarin Chinese, Classical Arabic, Hindi, or Tagalog, respectively)."

One idea that prevails in the literature is that "[heritage] languages include indigenous languages that are often endangered. . . as well as world languages that are commonly spoken in many other regions of the world (Spanish in the United States, Arabic in France)". However, that view is not shared universally. In Canada, for example, First Nations languages are not classified as heritage languages by some groups whereas they are so classified by others.

The label heritage is given to a language based principally on the social status of its speakers and not necessarily on any linguistic property. Thus, while Spanish typically comes in second in terms of native speakers worldwide and has official status in a number of countries, it is considered a heritage language in the English-dominant United States and Canada. Outside the United States and Canada, heritage language definitions and use vary.

Speakers of the same heritage language raised in the same community may differ significantly in terms of their language abilities, yet be considered heritage speakers under this definition. Some heritage speakers may be highly proficient in the language, possessing several registers, while other heritage speakers may be able to understand the language but not produce it. Other individuals that simply have a cultural connection with a minority language but do not speak it may consider it to be their heritage language. It is held by some that ownership does not necessarily depend on usership: “Some Aboriginal people distinguish between usership and ownership. There are even those who claim that they own a language although they only know one single word of it: its name.”

Proficiency

Heritage learners have a fluent command of the dominant language and are comfortable using it in formal settings because of their exposure to the language through formal education. Their command of the heritage language, however, varies widely. Some heritage learners may lose some fluency in the first language after they begin formal education in the dominant language. Others may use the heritage language consistently at home and with family but receive little or no formal training in it and thus may struggle with literacy skills or with using it in broader settings outside the home. An additional factor that affects acquisition is whether learners show willingness or reluctance towards learning the heritage language.

One factor that has been shown to influence the loss of fluency in the heritage language is age. Studies have shown that younger bilingual children are more susceptible to fluency loss than older bilingual children. The older the child is when the dominant language is introduced, the less likely the child is to lose the ability to use the first language (the heritage language). This is because the older the child is, the more exposure to and knowledge of the heritage language the child will have had, and thus the heritage language is more likely to remain their primary language.

Researchers found that this phenomenon primarily deals with the memory network of an individual. Once a memory network is organized, it is difficult for the brain to reorganize information contrary to the initial information, because the previous information was processed first. This phenomenon becomes a struggle for adults who are trying to learn a different language. Once an individual has learned a language fluently, they will be heavily influenced by the grammatical rules and pronunciations of their first language they learned, while learning a new language.

An emerging effective way of measuring the proficiency of a heritage speaker is by speech rate. A study of gender restructuring in heritage Russian showed that heritage speakers fell into two groups: those who maintained the three-gender system and those who radically reanalyzed the system as a two-gender system. The heritage speakers who reanalyzed the three-gender system as a two-gender system had a strong correlation with a slower speech rate. The correlation is straightforward—lower proficiency speakers have more difficulty accessing lexical items; therefore, their speech is slowed down.

Although speech rate has been shown to be an effective way of measuring proficiency of heritage speakers, some heritage speakers are reluctant to produce any heritage language whatsoever. Lexical proficiency is an alternative method that is also effective in measuring proficiency. In a study with heritage Russian speakers, there was a strong correlation between the speaker's knowledge of lexical items (measured using a basic word list of about 200) and the speaker's control over grammatical knowledge such as agreement, temporal marking, and embedding.

Some heritage speakers explicitly study the language to gain additional proficiency. The learning trajectories of heritage speakers are markedly different from the trajectories of second language learners with little or no previous exposure to a target language. For instance, heritage learners typically show a phonological advantage over second language learners in both perception and production of the heritage language, even when their exposure to the heritage language was interrupted very early in life. Heritage speakers also tend to distinguish, rather than conflate, easily confusable sounds in the heritage language and the dominant language more reliably than second language learners. In morphosyntax as well, heritage speakers have been found to be more native-like than second language learners, although they are typically significantly different from native speakers. Many linguists frame this change in heritage language acquisition as “incomplete acquisition” or "attrition." "Incomplete acquisition," loosely defined by Montrul, is "the outcome of language acquisition that is not complete in childhood." In this incomplete acquisition, there are particular properties of the language that were not able to reach age-appropriate levels of proficiency after the dominant language has been introduced. Attrition, as defined by Montrul, is the loss of a certain property of a language after one has already mastered it with native-speaker level accuracy. These two cases of language loss have been used by Montrul and many other linguists to describe the change in heritage language acquisition. However, this is not the only viewpoint of linguists to describe heritage language acquisition.

One argument against incomplete acquisition is that the input heritage speakers receive differs from that received by monolinguals (the input may be affected by cross-generational attrition, among other factors), so comparing heritage speakers against monolinguals is weak. This argument by Pascual and Rothman claims that the acquisition of the heritage language is therefore not incomplete, but complete and simply different from monolingual acquisition of a language. Another argument calls for a shift in focus from the result of incomplete acquisition of a heritage language to the process of heritage language acquisition. In this argument, the crucial factor in changes to heritage language acquisition is the extent to which the heritage speaker activates and processes the heritage language. This new model thus moves away from language acquisition that depends on exposure to input of the language and towards dependence on the frequency of processing for production and comprehension of the heritage language.

Some colleges and universities offer courses prepared for speakers of heritage languages. For example, students who grow up learning some Spanish in the home may enroll in a course that will build on their Spanish abilities.

Right-to-work law

From Wikipedia, the free encyclopedia ...