
Thursday, July 7, 2022

Tabula rasa

From Wikipedia, the free encyclopedia

Roman tabula or wax tablet with stylus

Tabula rasa (/ˈtæbjələ ˈrɑːsə, -zə, ˈr-/; "blank slate") is the theory that individuals are born without built-in mental content, and therefore all knowledge comes from experience or perception. Epistemological proponents of tabula rasa disagree with the doctrine of innatism, which holds that the mind is born already in possession of certain knowledge. Proponents of the tabula rasa theory also favour the "nurture" side of the nature versus nurture debate when it comes to aspects of one's personality, social and emotional behaviour, knowledge, and sapience.

Etymology

Tabula rasa is a Latin phrase often translated as clean slate in English and originates from the Roman tabula, a wax-covered tablet used for notes, which was blanked (rasa) by heating the wax and then smoothing it. This roughly equates to the English term "blank slate" (or, more literally, "erased slate") which refers to the emptiness of a slate prior to it being written on with chalk. Both may be renewed repeatedly, by melting the wax of the tablet or by erasing the chalk on the slate.

Philosophy

Ancient Greek philosophy

In Western philosophy, the concept of tabula rasa can be traced back to the writings of Aristotle who writes in his treatise De Anima (Περί Ψυχῆς, 'On the Soul') of the "unscribed tablet." In one of the more well-known passages of this treatise, he writes that:

Haven't we already disposed of the difficulty about interaction involving a common element, when we said that mind is in a sense potentially whatever is thinkable, though actually it is nothing until it has thought? What it thinks must be in it just as characters may be said to be on a writing-tablet on which as yet nothing stands written: this is exactly what happens with mind.

This idea was further evolved in Ancient Greek philosophy by the Stoic school. Stoic epistemology emphasizes that the mind starts blank, but acquires knowledge as the outside world is impressed upon it. The doxographer Aetius summarizes this view as "When a man is born, the Stoics say, he has the commanding part of his soul like a sheet of paper ready for writing upon." Diogenes Laërtius attributes a similar belief to the Stoic Zeno of Citium when he writes in Lives and Opinions of Eminent Philosophers that:

Perception, again, is an impression produced on the mind, its name being appropriately borrowed from impressions on wax made by a seal; and perception they divide into, comprehensible and incomprehensible: Comprehensible, which they call the criterion of facts, and which is produced by a real object, and is, therefore, at the same time conformable to that object; Incomprehensible, which has no relation to any real object, or else, if it has any such relation, does not correspond to it, being but a vague and indistinct representation.

Avicenna (11th century)

In the 11th century, the theory of tabula rasa was developed more clearly by Avicenna. He argued that the "human intellect at birth resembled a tabula rasa, a pure potentiality that is actualized through education and comes to know." Thus, according to Avicenna, knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts," which develops through a "syllogistic method of reasoning; observations lead to propositional statements, which when compounded lead to further abstract concepts." He further argued that the intellect itself "possesses levels of development from the static/material intellect, that potentiality can acquire knowledge to the active intellect, the state of the human intellect at conjunction with the perfect source of knowledge."

Ibn Tufail (12th century)

In the 12th century, the Andalusian-Islamic philosopher and novelist, Ibn Tufail (known as Abubacer or Ebn Tophail in the West) demonstrated the theory of tabula rasa as a thought experiment through his Arabic philosophical novel, Hayy ibn Yaqdhan, in which he depicts the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone.

The Latin translation of his philosophical novel, entitled Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in An Essay Concerning Human Understanding.

Aquinas (13th century)

In the 13th century, St. Thomas Aquinas brought the Aristotelian and Avicennian notions to the forefront of Christian thought. These notions sharply contrasted with the previously-held Platonic notions of the human mind as an entity that pre-existed somewhere in the heavens, before being sent down to join a body here on Earth (cf. Plato's Phaedo and Apology, as well as others). St. Bonaventure (also 13th century) was one of the fiercest intellectual opponents of Aquinas, offering some of the strongest arguments toward the Platonic idea of the mind.

Descartes (17th century)

Descartes, in his work The Search for Truth by Natural Light, summarizes an empiricist view in which he uses the words table rase, in French; in the following English translation, this was rendered tabula rasa:

All that seems to me to explain itself very clearly if we compare the imagination of children to a tabula rasa on which our ideas, which resemble portraits of each object taken from nature, should depict themselves. The senses, the inclinations, our masters and our intelligence, are the various painters who have the power of executing this work; and amongst them, those who are least adapted to succeed in it, i.e. the imperfect senses, blind instinct, and foolish nurses, are the first to mingle themselves with it. There finally comes the best of all, intelligence, and yet it is still requisite for it to have an apprenticeship of several years, and to follow the example of its masters for long, before daring to rectify a single one of their errors. In my opinion this is one of the principal causes of the difficulty we experience in attaining to true knowledge. For our senses really perceive that alone which is most coarse and common; our natural instinct is entirely corrupted; and as to our masters, although there may no doubt be very perfect ones found amongst them, they yet cannot force our minds to accept their reasoning before our understanding has examined it, for the accomplishment of this end pertains to it alone. But it is like a clever painter who might have been called upon to put the last touches on a bad picture sketched out by prentice hands, and who would probably have to employ all the rules of his art in correcting little by little first a trait here, then a trait there, and finally be required to add to it from his own hand all that was lacking, and who yet could not prevent great faults from remaining in it, because from the beginning the picture would have been badly conceived, the figures badly placed, and the proportions badly observed.

Locke (17th century)

The modern idea of the theory is attributed mostly to John Locke's expression of the idea in An Essay Concerning Human Understanding, particularly using the term "white paper" in Book II, Chap. I, 2. In Locke's philosophy, tabula rasa was the theory that at birth the (human) mind is a "blank slate" without rules for processing data, and that data is added and rules for processing are formed solely by one's sensory experiences. The notion is central to Lockean empiricism; it serves as the starting point for Locke's subsequent explication (in Book II) of simple ideas and complex ideas.

As understood by Locke, tabula rasa meant that the mind of the individual was born blank, and it also emphasized the freedom of individuals to author their own soul. Individuals are free to define the content of their character—but basic identity as a member of the human species cannot be altered. This presumption of a free, self-authored mind combined with an immutable human nature leads to the Lockean doctrine of "natural" rights. Locke's idea of tabula rasa is frequently compared with Thomas Hobbes's viewpoint of human nature, in which humans are endowed with inherent mental content—particularly with selfishness.

Freud (19th century)

Tabula rasa also features in Sigmund Freud's psychoanalysis. Freud depicted personality traits as being formed by family dynamics (see Oedipus complex). Freud's theories imply that humans lack free will, but also that genetic influences on human personality are minimal. In Freudian psychoanalysis, one is largely determined by one's upbringing.

Science

Psychology and neurobiology

Psychologists and neurobiologists have shown evidence that initially, the entire cerebral cortex is programmed and organized to process sensory input, control motor actions, regulate emotion, and respond reflexively (under predetermined conditions). These programmed mechanisms in the brain subsequently act to learn and to refine the organism's abilities. For example, psychologist Steven Pinker argued that—in contrast to written language—the brain is "hard-wired" at birth to acquire spoken language.

There have been claims by a minority in psychology and neurobiology, however, that the brain is tabula rasa only for certain behaviours. For instance, with respect to one's ability to acquire both general and special types of knowledge or skills, Michael Howe argued against the existence of innate talent. There also have been neurological investigations into specific learning and memory functions, such as Karl Lashley's study on mass action and serial interaction mechanisms.

Important evidence against the tabula rasa model of the mind comes from behavioural genetics, especially twin and adoption studies (see below). These indicate strong genetic influences on personal characteristics such as IQ, alcoholism, gender identity, and other traits. Critically, multivariate studies show that the distinct faculties of the mind, such as memory and reason, fractionate along genetic boundaries. Cultural universals such as emotion and the relative resilience of psychological adaptation to accidental biological changes also support basic biological mechanisms in the mind.

Social pre-wiring hypothesis

Twin studies have resulted in important evidence against the tabula rasa model of the mind, specifically, of social behaviour. The social pre-wiring hypothesis (also informally known as "wired to be social") refers to the ontogeny of social interaction. The theory questions whether there is a propensity to socially oriented action already present before birth. Research in the theory concludes that newborns are born into the world with a unique genetic wiring to be social.

Circumstantial evidence supporting the social pre-wiring hypothesis can be revealed by examining newborns' behaviour. Newborns, not even hours after birth, have been found to display a preparedness for social interaction, expressed in ways such as their imitation of facial gestures. This observed behaviour cannot be attributed to any current form of socialization or social construction. Rather, newborns most likely inherit social behaviour and identity, to some extent, through genetics.

Principal evidence for this theory is uncovered by examining twin pregnancies. The main argument is that if there are social behaviours that are inherited and developed before birth, then one should expect twin fetuses to engage in some form of social interaction before they are born. Thus, ten fetuses were analyzed over a period of time using ultrasound techniques. Using kinematic analysis, the results of the experiment were that the twin fetuses would interact with each other for longer periods and more often as the pregnancies went on. Researchers were able to conclude that the performance of movements between the co-twins was not accidental but specifically aimed.

The researchers concluded that the social pre-wiring hypothesis was supported:

The central advance of this study is the demonstration that 'social actions' are already performed in the second trimester of gestation. Starting from the 14th week of gestation twin fetuses plan and execute movements specifically aimed at the co-twin. These findings force us to predate the emergence of social behaviour: when the context enables it, as in the case of twin fetuses, other-directed actions are not only possible but predominant over self-directed actions.

Computer science

In artificial intelligence, tabula rasa refers to the development of autonomous agents with a mechanism to reason and plan toward their goal, but no "built-in" knowledge-base of their environment. Thus, they truly are a blank slate.

In reality, autonomous agents possess an initial data-set or knowledge-base, but this cannot be immutable or it would hamper autonomy and heuristic ability. Even if the data-set is empty, it may still be argued that there is a built-in bias in the reasoning and planning mechanisms. Whether placed there intentionally or unintentionally by the human designer, such bias negates the true spirit of tabula rasa.

A synthetic (programming-language) parser (LR(1), LALR(1), or SLR(1), for example) could be considered a special case of a tabula rasa. It is designed to accept any of a possibly infinite set of source programs within a single programming language and to output either a good parse of the program or a good machine-language translation of it, either of which represents success, or else a failure, and nothing else. Its "initial data-set" is a set of tables, generally produced mechanically by a parser-table generator, usually from a BNF representation of the source language, and representing a "table representation" of that single programming language.
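As a concrete sketch of "tables as the initial data-set", the following minimal shift-reduce recognizer handles the hypothetical toy grammar E -> E '+' 'n' | 'n'. The table entries here were worked out by hand for illustration; a real parser-table generator would emit them mechanically from the BNF:

```python
def parse(tokens):
    """Table-driven SLR(1) recognizer for the toy grammar E -> E '+' 'n' | 'n'."""
    # These tables are the parser's fixed "initial data-set" (hand-derived
    # here; normally produced by a parser-table generator from the grammar).
    ACTION = {
        (0, 'n'): ('s', 2),                        # shift 'n'
        (1, '+'): ('s', 3),                        # shift '+'
        (1, '$'): ('acc', None),                   # accept at end of input
        (2, '+'): ('r', 2), (2, '$'): ('r', 2),    # reduce E -> 'n'
        (3, 'n'): ('s', 4),                        # shift 'n'
        (4, '+'): ('r', 1), (4, '$'): ('r', 1),    # reduce E -> E '+' 'n'
    }
    GOTO = {(0, 'E'): 1}
    RULES = {1: ('E', 3), 2: ('E', 1)}             # rule -> (lhs, rhs length)

    stack, toks, i = [0], list(tokens) + ['$'], 0
    while True:
        act = ACTION.get((stack[-1], toks[i]))
        if act is None:
            return False                           # syntax error: no table entry
        op, arg = act
        if op == 's':                              # shift: consume token, push state
            stack.append(arg)
            i += 1
        elif op == 'r':                            # reduce: pop rhs, take goto on lhs
            lhs, length = RULES[arg]
            del stack[len(stack) - length:]
            stack.append(GOTO[(stack[-1], lhs)])
        else:
            return True                            # accept: input is in the language
```

The driver loop itself knows nothing about the language; swapping in a different set of tables would make it recognize a different grammar.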

AlphaZero achieved superhuman performance in various board games using self-play and tabula rasa reinforcement learning, meaning it had no access to human games or hard-coded human knowledge about either game, only being given the rules of the games.
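AlphaZero itself is far beyond a short example, but the tabula rasa flavour of self-play reinforcement learning can be sketched on a toy game. The sketch below (all parameters and the game choice are illustrative, not AlphaZero's) learns Nim from an empty value table, given only the rules:

```python
import random
from collections import defaultdict

# Hypothetical illustration of tabula rasa self-play: Nim with 7 stones,
# take 1 or 2 per turn, and the player taking the last stone wins. The
# agent is given only the rules; its value table starts empty.

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def train(episodes=5000, alpha=0.5, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)            # the blank slate: all values start at 0
    for _ in range(episodes):
        stones, history = 7, []
        while stones > 0:
            moves = legal_moves(stones)
            if rng.random() < eps:    # occasional exploration
                move = rng.choice(moves)
            else:                     # otherwise play the best-known move
                move = max(moves, key=lambda m: Q[(stones, m)])
            history.append((stones, move))
            stones -= move
        reward = 1.0                  # the player who moved last won
        for state, move in reversed(history):
            Q[(state, move)] += alpha * (reward - Q[(state, move)])
            reward = -reward          # players alternate, so flip the sign
    return Q

Q = train()
# Taking 1 from 7 leaves 6 stones (a multiple of 3, a losing position for
# the opponent), so self-play should learn to value (7, 1) above (7, 2).
```

Everything the agent ends up "knowing" about Nim strategy is distilled from its own games, which is the sense in which such learning is tabula rasa.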

Systems science

Impression of systems thinking about society

Systems science, also referred to as systems research or simply systems, is an interdisciplinary field concerned with understanding systems—from simple to complex—in nature, society, cognition, engineering, technology and science itself. The field is diverse, spanning the formal, natural, social, and applied sciences.

To systems scientists, the world can be understood as a system of systems. The field aims to develop interdisciplinary foundations that are applicable in a variety of areas, such as psychology, biology, medicine, communication, business management, technology, computer science, engineering, and social sciences.

Systems science covers formal sciences such as complex systems, cybernetics, dynamical systems theory, information theory, linguistics, and systems theory. It has applications in the natural and social sciences and in engineering, such as control theory, systems design, operations research, social systems theory, systems biology, system dynamics, human factors, systems ecology, computer science, systems engineering and systems psychology. Themes commonly stressed in systems science are (a) a holistic view, (b) interaction between a system and its embedding environment, and (c) complex (often subtle) trajectories of dynamic behavior that are sometimes stable (and thus reinforcing), while at various 'boundary conditions' they can become wildly unstable (and thus destructive). Concerns about Earth-scale biosphere/geosphere dynamics are an example of the nature of problems to which systems science seeks to contribute meaningful insights.
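Theme (c) can be illustrated numerically with a standard textbook example: the logistic map, one simple feedback rule whose trajectory is stable at one parameter value and wildly unstable (chaotic) at another. The parameter values below are the usual textbook choices:

```python
# One feedback rule, x -> r*x*(1-x), illustrating stable versus unstable
# dynamic behavior depending on the parameter ("boundary condition") r.
def tail_of_trajectory(r, x0=0.2, steps=200, keep=5):
    x = x0
    tail = []
    for i in range(steps):
        x = r * x * (1 - x)          # the logistic map update
        if i >= steps - keep:
            tail.append(x)           # record only the last few values
    return tail

stable = tail_of_trajectory(r=2.8)   # converges to the fixed point 1 - 1/r
chaotic = tail_of_trajectory(r=3.9)  # never settles down
```

At r=2.8 the last few values are indistinguishable from the fixed point; at r=3.9 they keep jumping around, which is the "wildly unstable" regime the text describes.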

Associated fields

Systems notes of Henk Bikker, TU Delft, 1991

The systems sciences are a broad array of fields. One way of conceiving of these is in three groups: fields that have developed systems ideas primarily through theory; those that have done so primarily through practical engagements with problem situations; and those that have applied systems ideas in the context of other disciplines.

Theoretical fields

Chaos and dynamical systems

Complexity

Control theory

Cybernetics

Information theory

General systems theory

Hierarchy theory

Practical fields

Critical systems thinking

Operations research and management science

Soft systems methodology

The soft systems methodology was developed in England by academics at the University of Lancaster Systems Department through a ten-year action research programme. The main contributor is Peter Checkland (born 18 December 1930, in Birmingham, UK), a British management scientist and emeritus professor of systems at Lancaster University.

Systems analysis

Systems analysis is the branch of systems science that analyzes systems, the interactions within those systems, or their interaction with the environment, often prior to their automation as computer models. Systems analysis is closely associated with the RAND Corporation.

Systemic design

Systemic design integrates methodologies from systems thinking with advanced design practices to address complex, multi-stakeholder situations.

Systems dynamics

System dynamics is an approach to understanding the behavior of complex systems over time. It offers a "simulation technique for modeling business and social systems" that deals with internal feedback loops and time delays affecting the behavior of the entire system. What distinguishes system dynamics from other approaches to studying complex systems is its use of feedback loops and of stocks and flows.
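A minimal stock-and-flow sketch (with hypothetical numbers) shows the core ingredients: one stock whose inflow is set by a negative feedback loop chasing a target level, integrated with simple Euler steps:

```python
# One stock ("inventory") filled by an inflow that closes the gap to a
# target level; the target, adjustment time, and step size are illustrative.
def simulate(target=100.0, adjust_time=4.0, dt=1.0, steps=40):
    stock = 20.0
    history = [stock]
    for _ in range(steps):
        inflow = (target - stock) / adjust_time  # feedback: close the gap
        stock += inflow * dt                     # the stock integrates its flow
        history.append(stock)
    return history

levels = simulate()
# The gap to the target shrinks by a factor of (1 - dt/adjust_time) each
# step, so the stock rises smoothly toward 100.
```

Adding a time delay to the inflow decision is what typically turns this smooth approach into the oscillations that system dynamics models are known for.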

Systems engineering

Systems engineering (SE) is an interdisciplinary field of engineering that focuses on the development and organization of complex systems. It is the "art and science of creating whole solutions to complex problems", for example: signal processing systems, control systems and communication systems, or other forms of high-level modelling and design in specific fields of engineering.

Applications in other disciplines

Earth system science

Systems biology

Systems chemistry

Systems ecology

Systems psychology

Systems scientists

General systems scientists can be divided into different generations. The founders of the systems movement like Ludwig von Bertalanffy, Kenneth Boulding, Ralph Gerard, James Grier Miller, George J. Klir, and Anatol Rapoport were all born between 1900 and 1920. They came from different natural and social science disciplines and joined forces in the 1950s to establish the general systems theory paradigm. As they organized their efforts, a first generation of systems scientists arose.

Among them were other scientists like Ackoff, Ashby, Margaret Mead and Churchman, who popularized the systems concept in the 1950s and 1960s. These scientists inspired and educated a second generation, with more notable scientists like Ervin Laszlo (1932) and Fritjof Capra (1939), who wrote about systems theory in the 1970s and 1980s. Others became acquainted with these works in the 1980s and began writing about them in the 1990s. Debora Hammond can be seen as a typical representative of this third generation of general systems scientists.

Organizations

The International Society for the Systems Sciences (ISSS) is an organisation for interdisciplinary collaboration and synthesis of systems sciences. The ISSS is unique among systems-oriented institutions in the breadth of its scope, bringing together scholars and practitioners from academic, business, government, and non-profit organizations, and drawing on fifty years of interdisciplinary research, from the scientific study of complex systems to interactive approaches in management and community development. The society was initially conceived in 1954 at the Stanford Center for Advanced Study in the Behavioral Sciences by Ludwig von Bertalanffy, Kenneth Boulding, Ralph Gerard, and Anatol Rapoport.

The International Federation for Systems Research (IFSR) is an international federation for global and local societies in the field of systems science. The federation is a non-profit, scientific and educational agency founded in 1981, constituted of some thirty member organizations from various countries. Its overall purpose is to advance cybernetic and systems research and systems applications and to serve the international systems community.

The best known research institute in the field is the Santa Fe Institute (SFI) located in Santa Fe, New Mexico, United States, dedicated to the study of complex systems. The institute was founded in 1984 by George Cowan, David Pines, Stirling Colgate, Murray Gell-Mann, Nick Metropolis, Herb Anderson, Peter A. Carruthers, and Richard Slansky. All but Pines and Gell-Mann were scientists with Los Alamos National Laboratory. SFI's original mission was to disseminate the notion of a separate interdisciplinary research area, complexity theory, referred to at SFI as complexity science. Recently, IIT Jodhpur in Rajasthan, India, began teaching systems science and engineering through bachelor's, master's, and doctoral programs, making it the first institution in India to offer systems science education to students.

Multiple trace theory


Multiple trace theory is a memory consolidation model advanced as an alternative model to strength theory. It posits that each time some information is presented to a person, it is neurally encoded in a unique memory trace composed of a combination of its attributes. Further support for this theory came in the 1960s from empirical findings that people could remember specific attributes about an object without remembering the object itself. The mode in which the information is presented and subsequently encoded can be flexibly incorporated into the model. This memory trace is unique from all others resembling it due to differences in some aspects of the item's attributes, and all memory traces incorporated since birth are combined into a multiple-trace representation in the brain. In memory research, a mathematical formulation of this theory can successfully explain empirical phenomena observed in recognition and recall tasks.

Attributes

The attributes an item possesses form its trace and can fall into many categories. When an item is committed to memory, information from each of these attributional categories is encoded into the item's trace. There may be a kind of semantic categorization at play, whereby an individual trace is incorporated into overarching concepts of an object. For example, when a person sees a pigeon, a trace is added to the "pigeon" cluster of traces within his or her mind. This new "pigeon" trace, while distinguishable and divisible from other instances of pigeons that the person may have seen within his or her life, serves to support the more general and overarching concept of a pigeon.

Physical

Physical attributes of an item encode information about physical properties of a presented item. For a word, this could include color, font, spelling, and size, while for a picture, the equivalent aspects could be shapes and colors of objects. It has been shown experimentally that people who are unable to recall an individual word can sometimes recall the first or last letter or even rhyming words, all aspects encoded in the physical orthography of a word's trace. Even when an item is not presented visually, when encoded, it may have some physical aspects based on a visual representation of the item.

Contextual

Contextual attributes are a broad class of attributes that define the internal and external features that are simultaneous with presentation of the item. Internal context is a sense of the internal network that a trace evokes. This may range from aspects of an individual's mood to other semantic associations the presentation of the word evokes. On the other hand, external context encodes information about the spatial and temporal aspects as information is being presented. This may reflect time of day or weather, for example. Spatial attributes can refer both to physical environment and imagined environment. The method of loci, a mnemonic strategy incorporating an imagined spatial position, assigns relative spatial positions to different items to be memorized; the items are then recalled by "walking through" these assigned positions.

Modal

Modality attributes carry information about the method by which an item was presented. The most frequent modalities in an experimental setting are auditory and visual. In practice, any sensory modality may be used.

Classifying

These attributes refer to the categorization of items presented. Items that fit into the same categories will have the same class attributes. For example, if the item "touchdown" were presented, it would evoke the overarching concept of "football" or perhaps, more generally, "sports", and it would likely share class attributes with "endzone" and other elements that fit into the same concept. A single item may fit into different concepts at the time it is presented depending on other attributes of the item, like context. For example, the word "star" might fall into the class of astronomy after visiting a space museum or a class with words like "celebrity" or "famous" after seeing a movie.

Mathematical formulation

The mathematical formulation of traces allows for a model of memory as an ever-growing matrix that is continuously receiving and incorporating information in the form of vectors of attributes. Multiple trace theory states that every item ever encoded, from birth to death, will exist in this matrix as multiple traces. This is done by giving every possible attribute some numerical value to classify it as it is encoded, so each encoded memory will have a unique set of numerical attributes.

Matrix definition of traces

By assigning numerical values to all possible attributes, it is convenient to construct a column vector representation of each encoded item. This vector representation can also be fed into computational models of the brain like neural networks, which take as inputs vectorial "memories" and simulate their biological encoding through neurons.

Formally, one can denote an encoded memory by numerical assignments to all of its possible attributes. If two items are perceived to have the same color or experienced in the same context, the numbers denoting their color and contextual attributes, respectively, will be relatively close. Suppose we encode a total of L attributes anytime we see an object. Then, when a memory is encoded, it can be written as m1 with L total numerical entries in a column vector:

m1 = [m1(1), m1(2), ..., m1(L)]^T.

A subset of the L attributes will be devoted to contextual attributes, a subset to physical attributes, and so on. One underlying assumption of multiple trace theory is that, when we construct multiple memories, we organize the attributes in the same order. Thus, we can similarly define vectors m2, m3, ..., mn to account for n total encoded memories. Multiple trace theory states that these memories come together in our brain to form a memory matrix from the simple concatenation of the individual memories:

M = [m1, m2, ..., mn].

For L total attributes and n total memories, M will have L rows and n columns. Note that, although the n traces are combined into a large memory matrix, each trace is individually accessible as a column in this matrix.
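This construction can be sketched directly (the attribute values below are arbitrary illustrative numbers, not values from any experiment):

```python
import numpy as np

# The memory matrix: each encoded item is a column of L attribute values,
# and M is the concatenation of the n columns.
L, n = 6, 4
rng = np.random.default_rng(0)
memories = [rng.normal(size=L) for _ in range(n)]  # m1, ..., mn
M = np.column_stack(memories)                      # L rows, n columns

assert M.shape == (L, n)
assert np.array_equal(M[:, 2], memories[2])        # each trace stays a column
```

The two assertions mirror the point in the text: the traces are combined into one matrix, yet each remains individually accessible as a column.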

In this formulation, the n different memories are made to be more or less independent of each other. However, items presented in some setting together will become tangentially associated by the similarity of their context vectors. If multiple items are made associated with each other and intentionally encoded in that manner, say an item a and an item b, then the memory for these two can be constructed, with each having k attributes as follows:

mab = [a(1), ..., a(k), b(1), ..., b(k)]^T.

Context as a stochastic vector

When items are learned one after another, it is tempting to say that they are learned in the same temporal context. However, in reality, there are subtle variations in context. Hence, contextual attributes are often considered to be changing over time as modeled by a stochastic process. Considering a vector of only r total context attributes ti that represents the context of memory mi, the context of the next-encoded memory is given by ti+1:

ti+1 = ti + ε,

so,

ti+1(j) = ti(j) + ε(j).

Here, ε(j) is a random number sampled from a Gaussian distribution.
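This context drift is just a Gaussian random walk, which can be sketched as follows (sigma is an assumed noise scale, not a parameter fixed by the theory):

```python
import numpy as np

# Each context attribute of the next memory equals the previous value
# plus Gaussian noise: t_{i+1}(j) = t_i(j) + eps(j).
def next_context(t_i, sigma=0.1, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    return t_i + rng.normal(scale=sigma, size=t_i.shape)

rng = np.random.default_rng(0)
t = np.zeros(4)                    # r = 4 context attributes
contexts = [t]
for _ in range(10):
    t = next_context(t, rng=rng)
    contexts.append(t)
# successive contexts stay similar; distant ones drift apart on average
```

Because the noise accumulates, memories encoded close together in time have more similar context vectors than memories encoded far apart, which is what gives context its temporal signature.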

Summed similarity

As explained in the subsequent section, the hallmark of multiple trace theory is an ability to compare some probe item to the pre-existing matrix of encoded memories. This simulates the memory search process, whereby we can determine whether we have ever seen the probe before as in recognition tasks or whether the probe gives rise to another previously encoded memory as in cued recall.

First, the probe p is encoded as an attribute vector. Continuing with the preceding example of the memory matrix M, the probe will have L entries:

p = [p(1), p(2), ..., p(L)]^T.

This p is then compared one by one to all pre-existing memories (traces) in M by determining the Euclidean distance between p and each mi:

d(p, mi) = sqrt( Σ_{j=1..L} (p(j) − mi(j))^2 ).

Due to the stochastic nature of context, it is almost never the case in multiple trace theory that a probe item exactly matches an encoded memory. Still, high similarity between p and mi is indicated by a small Euclidean distance. Hence, another operation must be performed on the distance that leads to very low similarity for great distance and very high similarity for small distance. A linear operation does not eliminate low-similarity items harshly enough. Intuitively, an exponential decay model seems most suitable:

similarity(p, mi) = exp( −d(p, mi) / τ ),

where τ is a decay parameter that can be experimentally assigned. We can go on to then define similarity to the entire memory matrix by a summed similarity SS(p, M) between the probe p and the memory matrix M:

SS(p, M) = Σ_{i=1..n} exp( −d(p, mi) / τ ).

If the probe item is very similar to even one of the encoded memories, SS receives a large boost. For example, given m1 as a probe item, we will get a near 0 distance (not exactly 0, due to context) for i=1, which will add nearly the maximal boost possible to SS. To differentiate from background similarity (there will always be some low similarity to context or a few attributes, for example), SS is often compared to some arbitrary criterion. If it is higher than the criterion, then the probe is considered among those encoded. The criterion can be varied based on the nature of the task and the desire to prevent false alarms. Thus, multiple trace theory predicts that, given some cue, the brain can compare the cue's summed similarity against a criterion to answer questions like "has this cue been experienced before?" (recognition) or "what memory does this cue elicit?" (cued recall), which are the applications of summed similarity described below.
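The whole pipeline above (distance, exponential decay, sum, criterion) fits in a few lines. The trace values, τ, and the criterion below are illustrative choices, not fitted parameters:

```python
import numpy as np

# Summed similarity: Euclidean distance from the probe to every stored
# trace (column of M), exponential decay, then a sum compared to a criterion.
def summed_similarity(p, M, tau=1.0):
    dists = np.linalg.norm(M - p[:, None], axis=0)  # d(p, mi) per column
    return np.sum(np.exp(-dists / tau))

rng = np.random.default_rng(1)
M = rng.normal(size=(8, 5))                           # 5 traces, 8 attributes
old_probe = M[:, 2] + rng.normal(scale=0.05, size=8)  # close to one trace
new_probe = rng.normal(size=8) + 5.0                  # far from every trace

criterion = 0.5
recognized_old = summed_similarity(old_probe, M) > criterion  # True
recognized_new = summed_similarity(new_probe, M) > criterion  # False
```

The old probe's small distance to one column dominates the sum and pushes SS past the criterion, while the new probe contributes only negligible exponential terms.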

Applications to memory phenomena

Recognition

Multiple trace theory fits well into the conceptual framework for recognition. Recognition requires an individual to determine whether or not they have seen an item before. For example, facial recognition is determining whether one has seen a face before. When asked this for a successfully encoded item (something that has indeed been seen before), recognition should occur with high probability. In the mathematical framework of this theory, we can model recognition of an individual probe item p by summed similarity with a criterion. We translate the test item into an attribute vector, as done for the encoded memories, and compare it to every trace ever encountered. If summed similarity passes the criterion, we say we have seen the item before. Summed similarity is expected to be very low if the item has never been seen but relatively higher if it has, due to the similarity of the probe's attributes to some memory of the memory matrix.

This can be applied both to individual item recognition and associative recognition for two or more items together.

Cued recall

The theory can also account for cued recall. Here, some cue is given that is meant to elicit an item out of memory. For example, a factual question like "Who was the first President of the United States?" is a cue to elicit the answer of "George Washington". In the "ab" framework described above, we can take all attributes present in a cue and consider these the a item in an encoded association as we try to recall the b portion of the mab memory. In this example, attributes like "first", "President", and "United States" will be combined to form the a vector, which will already have been encoded in the mab memory, whose b values encode "George Washington". Given a, there are two popular models for how we can successfully recall b:

1) We can go through and determine similarity (not summed similarity; see above for the distinction) between the a attributes of the cue and every item in memory, then pick whichever memory has the highest such similarity. Whatever b-type attributes are linked to that memory give what we recall. The mab memory gives the best chance of recall, since its a elements will have high similarity to the cue a. Still, since recall does not always occur, we can say that the similarity must pass a criterion for recall to occur at all. This is similar to how the IBM machine Watson operates. Here, the similarity compares only the a-type attributes of a to mab.
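This best-match model can be sketched as follows. The function name, the τ and criterion values, and the use of Euclidean distance over the a-part of each trace are illustrative assumptions:

```python
import math

def recall_best_match(cue_a, M, a_dim, tau=1.0, criterion=0.1):
    """Model 1: return the b-part of the trace whose a-part best matches the cue.

    Each trace in M is a list of a-attributes followed by b-attributes.
    """
    best_sim, best_trace = 0.0, None
    for trace in M:
        sim = math.exp(-tau * math.dist(cue_a, trace[:a_dim]))  # a-part similarity only
        if sim > best_sim:
            best_sim, best_trace = sim, trace
    if best_sim < criterion:   # similarity must pass a criterion for recall to occur
        return None
    return best_trace[a_dim:]  # the linked b-type attributes are what we recall

# Two encoded associations: first two values are a attributes, last two are b
M = [[1.0, 0.0,  0.0, 1.0],
     [0.0, 1.0,  1.0, 0.0]]
answer = recall_best_match([0.9, 0.1], M, a_dim=2)  # cue close to the first a-part
```

The cue [0.9, 0.1] is nearest the first trace's a-part, so its b-part is recalled.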

2) We can use a probabilistic choice rule that sets the probability of recalling an item proportional to its similarity. This is akin to throwing a dart at a dartboard, with bigger areas representing larger similarities to the cue item. Mathematically speaking, given the cue a, the probability of recalling the desired memory mab is:

P(m_{ab} \mid a) = \frac{\mathrm{similarity}(a, m_{ab})}{SS(a, M) + \mathrm{err}}

In computing both the similarity and the summed similarity, we only consider relations among a-type attributes. We add the error term err because, without it, the probability of recalling some memory in M would always be 1, yet there are certainly times when recall does not occur at all.
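A minimal sketch of this probabilistic choice rule follows; the τ and err values are arbitrary illustrative choices, and similarity is again computed over the a-part of each trace only:

```python
import math

def recall_probabilities(cue_a, M, a_dim, tau=1.0, err=0.5):
    """Model 2: probability of recalling each trace, proportional to a-part similarity.

    The error term err in the denominator leaves some probability
    that no memory is recalled at all.
    """
    sims = [math.exp(-tau * math.dist(cue_a, trace[:a_dim])) for trace in M]
    denom = sum(sims) + err           # summed similarity over a-parts, plus error term
    return [s / denom for s in sims]  # P(recall trace i | cue a)

# Two encoded associations: first two values are a attributes, last two are b
M = [[1.0, 0.0,  0.0, 1.0],
     [0.0, 1.0,  1.0, 0.0]]
probs = recall_probabilities([0.9, 0.1], M, a_dim=2)
```

The probabilities sum to less than 1; the remainder is the probability that recall fails entirely, which is the role the error term plays.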

Other common results explained

Phenomena in memory associated with repetition, word frequency, recency, forgetting, and contiguity, among others, can be easily explained in the realm of multiple trace theory. Memory is known to improve with repeated exposure to items. For example, hearing a word several times in a list will improve recognition and recall of that word later on. This is because each repeated exposure adds another trace to the ever-growing memory matrix, so summed similarity with this item will be larger and thus more likely to pass the criterion.

In recognition, very common words are harder to recognize as part of a memorized list, when tested, than rare words. This is known as the word frequency effect and can be explained by multiple trace theory as well. For common words, summed similarity will be relatively high whether or not the word was seen in the list, because the word has likely been encountered and encoded in the memory matrix many times throughout life. Thus, the brain typically selects a higher criterion in determining whether common words are part of a list, making them harder to successfully select. Rarer words, however, are typically encountered less often throughout life, so their presence in the memory matrix is limited. Hence, low overall summed similarity permits a more lax criterion. If the word was present in the list, high context similarity at the time of test, along with other attribute similarity, will boost summed similarity enough to push it past the criterion, and the rare word is recognized successfully.

Recency in the serial position effect can be explained because more recently encoded memories will share a temporal context most similar to the present context, since stochastic contextual drift will not yet have had as pronounced an effect. Thus, context similarity will be high for recently encoded items, so overall similarity will be relatively higher for these items as well. The stochastic contextual drift is also thought to account for forgetting, because the context in which a memory was encoded is lost over time, so summed similarity for an item presented only in that context will decrease over time.
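A minimal simulation makes the recency argument concrete, assuming a Gaussian random-walk model of contextual drift; the context dimensionality, step size, and τ are arbitrary illustrative choices:

```python
import math
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def drift(context, steps, sigma=0.05):
    """Random-walk drift of a context vector: each step adds small Gaussian noise."""
    for _ in range(steps):
        context = [c + random.gauss(0.0, sigma) for c in context]
    return context

old_context = [0.0] * 8                           # context when an old item was encoded
recent_context = drift(old_context, steps=200)    # context has drifted before a recent item is encoded
present_context = drift(recent_context, steps=5)  # only a little more drift before the test

tau = 1.0
sim_recent = math.exp(-tau * math.dist(present_context, recent_context))
sim_old = math.exp(-tau * math.dist(present_context, old_context))
```

The recent item's encoding context lies only a few drift steps from the present context, so its contextual similarity at test is higher than the old item's, reproducing both recency and, as drift accumulates, forgetting.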

Finally, empirical data have shown a contiguity effect, whereby items that are presented together temporally, even though they may not be encoded as a single memory as in the "ab" paradigm described above, are more likely to be remembered together. This can be considered a result of low contextual drift between items remembered together, so the contextual similarity between two items presented together is high.

Shortcomings

One of the biggest shortcomings of multiple trace theory is the requirement of some item with which to compare the memory matrix when determining successful encoding. As mentioned above, this works quite well in recognition and cued recall, but there is a glaring inability to incorporate free recall into the model. Free recall requires an individual to freely remember some list of items. Although the very act of asking to recall may act as a cue that can then elicit cued recall techniques, it is unlikely that the cue is unique enough to reach a summed similarity criterion or to otherwise achieve a high probability of recall.

Another major issue lies in translating the model to biological relevance. It is hard to imagine that the brain has unlimited capacity to keep track of such a large matrix of memories and continue expanding it with every item with which it has ever been presented. Furthermore, searching through this matrix is an exhaustive process that would not be relevant on biological time scales.
