Trade involves the transfer of goods and services from one
person or entity to another, often in exchange for money. Economists
refer to a system or network that allows trade as a market.
An early form of trade, barter, saw the direct exchange of goods and services for other goods and services,[1] i.e. trading things without the use of money.[1] Modern traders generally negotiate through a medium of exchange, such as money. As a result, buying can be separated from selling, or earning. The invention of money (and letters of credit, paper money, and non-physical money) greatly simplified and promoted trade. Trade between two traders is called bilateral trade, while trade involving more than two traders is called multilateral trade.
In one modern view, trade exists due to specialization and the division of labor, a predominant form of economic activity
in which individuals and groups concentrate on a small aspect of
production, but use their output in trade for other products and needs.[2] Trade exists between regions because different regions may have a comparative advantage (perceived or real) in the production of some tradable commodity, including the production of natural resources scarce or limited elsewhere. For example, a region's size may encourage mass production. In such circumstances, trading at market price
between locations can benefit both locations. Different types of
traders may specialize in trading different kinds of goods; for example,
the spice trade and grain trade have both historically been important in the development of a global, international economy.
A busy market in Mile 12, Lagos, Nigeria
Retail trade consists of the sale of goods or merchandise from a fixed location[3] (such as a department store, boutique, or kiosk), online, or by mail, in small or individual lots for direct consumption or use by the purchaser.[4] Wholesale trade is the traffic in goods that are sold as merchandise to retailers; to industrial, commercial, institutional, or other professional business users; or to other wholesalers and related subordinated services.
Historically, openness to free trade
substantially increased in some areas from 1815 until the outbreak of
World War I in 1914. Trade openness increased again during the 1920s but
collapsed (in particular in Europe and North America) during the Great Depression of the 1930s. Trade openness increased substantially again from the 1950s onward (albeit with a slowdown during the oil crisis of the 1970s). Economists and economic historians contend that current levels of trade openness are the highest they have ever been.[5][6][7]
Etymology
Trade is from Middle English trade ("path, course of conduct"), introduced into English by Hanseatic merchants, from Middle Low German trade ("track, course"), from Old Saxon trada ("spoor, track"), from Proto-Germanic *tradō ("track, way"), and cognate with Old English tredan ("to tread").
Commerce is derived from the Latin commercium, from cum ("together") and merx ("merchandise").[8]
In the Mediterranean region, the earliest contact between cultures involved members of the species Homo sapiens, principally using the Danube river, at a time beginning 35,000–30,000 BP.[10][11][12][13][need quotation to verify]
Some[who?] trace the origins of commerce to the very start of transactions in prehistoric times. Apart from traditional self-sufficiency, trading became a principal facility for prehistoric people, who bartered what they had for goods and services from each other.
The caduceus, traditionally associated with Mercury (the Roman patron-god of merchants), continues in use as a symbol of commerce.[14]
Trade is believed to have taken place throughout much of recorded human history. There is evidence of the exchange of obsidian and flint during the Stone Age. Trade in obsidian is believed to have taken place in New Guinea from 17,000 BCE.[15][16]
The earliest use of obsidian in the Near East dates to the Lower and Middle Paleolithic.[17]
Archaeological evidence shows that obsidian increasingly became the preferred material over chert from the late Mesolithic to the Neolithic; because deposits of obsidian are rare in the Mediterranean region, obtaining it required exchange.[22][23][24]
Obsidian provided the material to make cutting utensils and tools, and although other, more easily obtainable materials were available, its use appears to have been confined to the higher-status members of a tribe, as "the rich man's flint".[25] Obsidian also held its value relative to flint.
Early traders moved obsidian over distances of up to 900 kilometres within the Mediterranean region.[26]
Obsidian was the material most widely traded in the Mediterranean during the European Neolithic.[22][27] Networks were in existence by around 12,000 BCE.[28] According to Zarins's 1990 study, Anatolia was the primary source for trade with the Levant, Iran, and Egypt.[29][30][31] The Melos and Lipari sources produced some of the most widespread trade in the Mediterranean region known to archaeology.[32]
The Sari-i-Sang mine in the mountains of Afghanistan was the largest source of lapis lazuli for trade.[33][34] The material was traded most extensively during the Kassite period of Babylonia, beginning 1595 BCE.[35][36]
Later trade
Mediterranean and Near East
Ebla was a prominent trading center during the third millennium BCE, with a network reaching into Anatolia and northern Mesopotamia.[32][37][38][39]
A map of the Silk Road trade route between Europe and Asia
Materials used for creating jewelry were traded with Egypt since 3000 BCE. Long-range trade routes first appeared in the 3rd millennium BCE, when Sumerians in Mesopotamia traded with the Harappan civilization of the Indus Valley. The Phoenicians were noted sea traders, traveling across the Mediterranean Sea, and as far north as Britain for sources of tin to manufacture bronze. For this purpose they established trade colonies the Greeks called emporia.[40]
Along the coast of the Mediterranean, researchers have found a positive
relationship between how well-connected a coastal location was and the
local prevalence of archaeological sites from the Iron Age. This
suggests that a location's trade potential was an important determinant
of human settlements.[41]
From the beginning of Greek civilization until the fall of the Roman Empire in the 5th century, a financially lucrative trade brought valuable spices to Europe from the Far East, including India and China. Roman commerce allowed its empire to flourish and endure. The later Roman Republic and the Pax Romana of the Roman Empire produced a stable and secure transportation network that enabled the shipment of trade goods without fear of significant piracy, as Rome had become the sole effective sea power in the Mediterranean with the conquest of Egypt and the Near East.[42]
In ancient Greece Hermes was the god of trade[43][44] (commerce) and weights and measures.[45] In ancient Rome, Mercurius was the god of merchants, whose festival was celebrated by traders on the 25th day of the fifth month.[46][47]
The concept of free trade was antithetical to the will and economic direction of the sovereigns of the ancient Greek states. Free trade between states was stifled by the need for strict internal controls (via taxation) to maintain security within the treasury of the sovereign, which nevertheless enabled the maintenance of a modicum of civility within the structures of functional community life.[48][49]
The fall of the Roman empire and the succeeding Dark Ages brought instability to Western Europe
and a near-collapse of the trade network in the western world. Trade,
however, continued to flourish among the kingdoms of Africa, the Middle
East, India, China, and Southeast Asia. Some trade did occur in the
west. For instance, Radhanites were a medieval guild or group (the precise meaning of the word is lost to history) of Jewish merchants who traded between the Christians in Europe and the Muslims of the Near East.[50]
The first true maritime trade network in the Indian Ocean was by the Austronesian peoples of Island Southeast Asia.[51] Initiated by the indigenous peoples of Taiwan and the Philippines, the Maritime Jade Road
was an extensive trading network connecting multiple areas in Southeast
and East Asia. Its primary products were made of jade mined from Taiwan
by Taiwanese indigenous peoples and processed mostly in the Philippines by indigenous Filipinos, especially in Batanes, Luzon, and Palawan. Some were also processed in Vietnam, while the peoples of Malaysia, Brunei, Singapore, Thailand, Indonesia, and Cambodia
also participated in the massive trading network. The maritime road is one of the most extensive sea-based trade networks of a single geological material in the prehistoric world. It was in existence for at least 3,000 years, with peak production from 2000 BCE to 500 CE, making it older than the Silk Road in mainland Eurasia and the later Maritime Silk Road.
The Maritime Jade Road began to wane during its final centuries from
500 CE until 1000 CE. The entire period of the network was a golden age
for the diverse societies of the region.[52][53][54][55]
Tajadero or axe money used as currency in Mesoamerica. It had a fixed worth of 8,000 cacao seeds, which were also used as currency.[60]
The emergence of exchange networks in the Pre-Columbian societies of and near Mexico is known to have occurred in the years shortly before and after 1500 BCE.[61]
Trade networks reached north to Oasisamerica. There is evidence of established maritime trade with the cultures of northwestern South America and the Caribbean.
Middle Ages
During the Middle Ages, commerce developed in Europe by trading luxury goods at trade fairs. Wealth became converted into movable wealth or capital.
Banking systems developed where money on account was transferred across
national boundaries. Hand-to-hand markets became a feature of town life
and were regulated by town authorities.
Western Europe established a complex and expansive trade network with cargo ships being the main carriers of goods; cogs and hulks are two examples of such ships.[62] Many ports developed their own extensive trade networks. The English port city of Bristol traded with peoples in what is modern-day Iceland, all along the western coast of France, and down to what is now Spain.[63]
During the Middle Ages, Central Asia was the economic center of the world.[64] The Sogdians dominated the east–west trade route known as the Silk Road after the 4th century CE up to the 8th century CE, with Suyab and Talas ranking among their main centers in the north. They were the main caravan merchants of Central Asia.
From the Middle Ages, the maritime republics, in particular Venice, Pisa and Genoa, played a key role in trade along the Mediterranean. From the 11th to the late 15th centuries, the Venetian Republic and the Republic of Genoa
were major trade centers. They dominated trade in the Mediterranean and
the Black Sea, having the monopoly between Europe and the Near East for
centuries.[65][66]
From the 8th to the 11th century, the Vikings and Varangians traded as they sailed from and to Scandinavia. Vikings sailed to Western Europe, while Varangians sailed to Russia. The Hanseatic League was an alliance of trading cities that maintained a trade monopoly over most of Northern Europe and the Baltic between the 13th and 17th centuries.
The Age of Sail and the Industrial Revolution
Portuguese explorer Vasco da Gama pioneered the European spice trade in 1498 when he reached Calicut after sailing around the Cape of Good Hope
at the southern tip of the African continent. Prior to this, the flow
of spice into Europe from India was controlled by Islamic powers,
especially Egypt. The spice trade was of major economic importance and
helped spur the Age of Discovery
in Europe. Spices brought to Europe from the Eastern world were some of
the most valuable commodities for their weight, sometimes rivaling gold.
Founded in 1352, the Bengal Sultanate was one of the world's major trading nations and was often referred to by Europeans as the wealthiest country with which to trade.[68]
In the 16th and 17th centuries, the Portuguese gained an economic advantage in the Kingdom of Kongo due to different philosophies of trade.[67]
Whereas Portuguese traders concentrated on the accumulation of capital,
in Kongo spiritual meaning was attached to many objects of trade.
According to economic historian Toby Green, in Kongo "giving more than receiving was a symbol of spiritual and political power and privilege."[67]
In the 16th century, the Seventeen Provinces were the center of free trade, imposing no exchange controls, and advocating the free movement of goods. Trade in the East Indies was dominated by Portugal in the 16th century, the Dutch Republic in the 17th century, and the British in the 18th century. The Spanish Empire developed regular trade links across both the Atlantic and the Pacific Oceans.
In 1776, Adam Smith published the book An Inquiry into the Nature and Causes of the Wealth of Nations. It criticized mercantilism and argued that economic specialization could benefit nations just as much as firms. Since the division of labour was restricted by the size of the market, he said that countries with access to larger markets would be able to divide labour more efficiently and thereby become more productive. Smith said that he considered all rationalizations of import and export controls "dupery", which hurt the trading nation as a whole for the benefit of specific industries.
In 1799, the Dutch East India Company, formerly the world's largest company, became bankrupt, partly due to the rise of competitive free trade.
When an inefficient producer sends the merchandise it
produces best to a country able to produce it more efficiently, both
countries benefit.
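This is the classical logic of comparative advantage. A minimal numeric sketch illustrates it; the labour costs below follow Ricardo's traditional wine-and-cloth illustration and are invented for the example, not taken from the text:

```python
# Invented labour costs: hours of labour needed to produce one unit
# of each good in each country. Portugal is the more efficient
# producer of BOTH goods (the "inefficient producer" here is England).
costs = {
    "Portugal": {"wine": 80, "cloth": 90},
    "England":  {"wine": 120, "cloth": 100},
}

def opportunity_cost(country, good, other_good):
    """Units of other_good given up to produce one unit of good."""
    c = costs[country]
    return c[good] / c[other_good]

# Portugal gives up less cloth per unit of wine (80/90 ≈ 0.89) than
# England does (120/100 = 1.2), so Portugal holds the comparative
# advantage in wine even though it is absolutely better at both goods.
# Each country gains by exporting the good it produces comparatively best.
pt = opportunity_cost("Portugal", "wine", "cloth")
en = opportunity_cost("England", "wine", "cloth")
assert pt < en
```

At any exchange ratio between those two opportunity costs (say, 1 unit of wine for 1 unit of cloth), both countries obtain goods more cheaply through trade than through home production.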
In the mid-19th century, the ascendancy of free trade was primarily based on national advantage. That is, the calculation made was whether it was in any particular country's self-interest to open its borders to imports.
John Stuart Mill showed that a country with monopoly pricing power on the international market could manipulate the terms of trade by maintaining tariffs, and that the response to this might be reciprocity in trade policy. Ricardo and others had suggested this earlier. This was taken as evidence against the universal doctrine of free trade, as it was believed that more of the economic surplus of trade would accrue to a country following reciprocal, rather than completely free, trade policies. This was followed within a few years by the infant industry argument, developed by Mill, which held that government had a duty to protect young industries, although only for the time necessary for them to develop full capacity. This became the policy in many countries attempting to industrialize and out-compete English exporters. Milton Friedman later continued this vein of thought, showing that in a few circumstances tariffs might be beneficial to the host country, but never for the world at large.[69]
20th century
The Great Depression
was a major economic recession that ran from 1929 to the late 1930s.
During this period, there was a great drop in trade and other economic
indicators.
The lack of free trade was considered by many a principal cause of the Depression, prolonging stagnation and deflation.[70] Only during World War II did the recession end in the United States. Also during the war, in 1944, 44 countries signed the Bretton Woods Agreement, intended to prevent national trade barriers and avoid depressions. It set up rules and institutions to regulate the international political economy: the International Monetary Fund and the International Bank for Reconstruction and Development (later divided into the World Bank and the Bank for International Settlements). These organizations became operational in 1946 after enough countries ratified the agreement. In 1947, 23 countries agreed to the General Agreement on Tariffs and Trade to promote free trade.[71]
The European Union became the world's largest exporter of manufactured goods and services, the biggest export market for around 80 countries.[72]
Today, trade is merely a subset within a complex system of companies which try to maximize their profits by offering products and services to the market (which consists both of individuals and other companies) at the lowest production cost. A system of international trade has helped to develop the world economy but, in combination with bilateral or multilateral agreements to lower tariffs or to achieve free trade, has sometimes harmed third-world markets for local products.
Free trade is a policy by which a government does not discriminate against imports or exports by applying tariffs or subsidies. This policy is also known as a laissez-faire policy. Such a policy does not necessarily imply, however, that a country abandons all control and taxation of imports and exports.[73]
Free trade advanced further in the late 20th century and early 2000s: the EC was transformed into the European Union, which completed Economic and Monetary Union (EMU) in 2002 by introducing the euro, thereby creating a genuine single market between 13 member states as of January 1, 2007.
Protectionism is the policy of restraining and discouraging trade
between states and contrasts with the policy of free trade. This policy
often takes the form of tariffs and restrictive quotas. Protectionist policies were particularly prevalent in the 1930s, between the Great Depression and the onset of World War II.
Judeo-Christian teachings do not prohibit trade. They do prohibit fraud and dishonest measures. Historically, they also forbade charging interest on loans.[76][77]
The first instances of money were objects with intrinsic value. This is called commodity money and includes any commonly available commodity that has intrinsic value; historical examples include pigs, rare seashells, whale's teeth, and (often) cattle. In medieval Iraq, bread was used as an early form of money. In the Aztec Empire, under the rule of Montezuma, cocoa beans became legitimate currency.[78]
Currency
was introduced as standardised money to facilitate a wider exchange of
goods and services. This first stage of currency, where metals were
used to represent stored value, and symbols to represent commodities,
formed the basis of trade in the Fertile Crescent for over 1500 years.
Numismatists have examples of coins from the earliest large-scale societies, although these were initially unmarked lumps of precious metal.[79]
The Doha round of World Trade Organization negotiations aimed to lower barriers to trade around the world, with a focus on making trade fairer for developing countries. Talks stalled over a divide between the rich developed countries, represented by the G20, and the major developing countries. Agricultural subsidies have been the most significant issue, and the one on which agreement has been hardest to negotiate. By contrast, there was much agreement on trade facilitation and capacity building. The Doha round began in Doha, Qatar, and negotiations continued in Cancún, Mexico; Geneva, Switzerland; Paris, France; and Hong Kong.[citation needed]
China
Beginning around 1978, the government of the People's Republic of China (PRC) began an experiment in economic reform. In contrast to the previous Soviet-style centrally planned economy,
the new measures progressively relaxed restrictions on farming,
agricultural distribution and, several years later, urban enterprises
and labor. The more market-oriented approach reduced inefficiencies and
stimulated private investment, particularly by farmers, which led to
increased productivity and output. One feature was the establishment of
four (later five) Special Economic Zones located along the South-east coast.[80]
The reforms proved spectacularly successful in terms of increased output, variety, quality, price and demand.
In real terms, the economy doubled in size between 1978 and 1986,
doubled again by 1994, and again by 2003. On a real per capita basis,
doubling from the 1978 base took place in 1987, 1996 and 2006. By 2008,
the economy was 16.7 times the size it was in 1978, and 12.1 times its
previous per capita levels. International trade progressed even more
rapidly, doubling on average every 4.5 years. Total two-way trade in
January 1998 exceeded that for all of 1978; in the first quarter of
2009, trade exceeded the full-year 1998 level. In 2008, China's two-way
trade totaled US$2.56 trillion.[81]
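The doubling times quoted above imply steady compound growth; a short arithmetic sketch, using only the figures quoted in the text, converts a doubling time into the implied annual growth rate:

```python
def annual_rate(years_to_double):
    """Compound annual growth rate implied by a given doubling time."""
    return 2 ** (1 / years_to_double) - 1

# Real GDP doubling over 1978-1986 (8 years) implies roughly 9% per year.
gdp_rate = annual_rate(8)

# Trade doubling on average every 4.5 years implies roughly 17% per year.
trade_rate = annual_rate(4.5)

# Compounded over the 30 years from 1978 to 2008, a 4.5-year doubling
# time yields 2 ** (30 / 4.5), i.e. roughly a hundredfold increase.
trade_multiple = 2 ** (30 / 4.5)
```

This is why trade could grow so much faster than output itself: a few percentage points of extra annual growth, compounded over three decades, separate a sixteenfold expansion from a hundredfold one.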
International trade is the exchange of goods and services across national borders. In most countries, it represents a significant part of GDP. While international trade has been present throughout much of history (see Silk Road, Amber Road), its economic, social, and political importance have increased in recent centuries, mainly because of Industrialization, advanced transportation, globalization, multinational corporations, and outsourcing.[citation needed]
Empirical evidence for the success of trade can be seen in the contrast between countries such as South Korea, which adopted a policy of export-oriented industrialization,
and India, which historically had a more closed policy. South Korea
has done much better by economic criteria than India over the past fifty
years, though its success also has to do with effective state
institutions.[84]
Trade sanctions
Trade sanctions against a specific country are sometimes imposed, in order to punish that country for some action. An embargo,
a severe form of externally imposed isolation, is a blockade of all
trade by one country on another. For example, the United States has had
an embargo against Cuba for over 40 years.[85] Embargoes are usually temporary. For example, Armenia imposed a temporary embargo on Turkish products, banning all imports from Turkey, on December 31, 2020. The move was prompted by food-security concerns arising from Turkey's hostile attitude towards Armenia.[86]
Importing firms voluntarily adhere to fair trade standards, or governments may enforce them through a combination of employment and commercial law. Proposed and practiced fair trade policies vary widely, ranging from the common prohibition of goods made using slave labour to minimum price support schemes such as those for coffee in the 1980s. Non-governmental organizations also play a role in promoting fair trade standards by serving as independent monitors of compliance with labeling requirements.[88][89] As such, fair trade is a form of protectionism.
Social learning (social pedagogy) is learning that takes place at a wider scale than individual or group learning, up to a societal scale, through social interaction between peers.
Definition
Social learning
is defined as learning through the observation of other people's
behaviors. It is a process of social change in which people learn from
each other in ways that can benefit wider social-ecological systems.
Different social contexts allow individuals to pick up new behaviors by
observing what people are doing within that environment.
Social learning and social pedagogy emphasize the dynamic interaction
between people and the environment in the construction of meaning and
identity.
The process of learning a new behaviour starts by observing a
behaviour, taking the information in and finally adopting that
behaviour. Examples of environmental contexts that promote social
learning are schools, media, family members and friends.
If learning is to be considered as social, then it must:
demonstrate that a change in understanding has taken place in the individuals involved;
demonstrate that this change goes beyond the individual and becomes
situated within wider social units or communities of practice;
occur through social interactions and processes between actors within a social network.
Social pedagogy is a theoretical system that focuses on the development of the child and how practice and training affect the child's life skills. This idea is centered on the notion that children are active and competent.
History
18th century
Jean-Jacques Rousseau advanced the idea that all humans are born good but are ultimately corrupted by society, implying a form of social learning.
19th century
The literature on the topic of social pedagogy tends to identify German educator Karl Mager (1810-1858) as the person who coined the term ‘social pedagogy’ in 1844. Mager and Friedrich Adolph Diesterweg
shared the belief that education should go beyond the individual's
acquisition of knowledge and focus on the acquisition of culture by
society. Ultimately, it should benefit the community itself.
The founding father of social pedagogy, German philosopher and educator Paul Natorp (1854-1924), published the book Sozialpädagogik: Theorie der Willensbildung auf der Grundlage der Gemeinschaft (Social Pedagogy: The theory of educating the human will into a community asset) in 1899. Natorp argued that in all instances pedagogy should be social: teachers should consider the interaction between educational and societal processes.
1950s - 1990s
The field of developmental psychology
underwent significant changes during these decades as social learning
theories started to gain traction through the research and experiments
of psychologists such as Julian Rotter, Albert Bandura and Robert Sears. In 1954, Julian Rotter developed his social learning theory, which linked changes in human behavior with environmental interactions. Its predictive variables were behavior potential, expectancy, reinforcement value, and the psychological situation. Bandura conducted his Bobo doll experiment in 1961 and developed his social learning theory in 1977.
These contributions to the field of developmental psychology cemented a
strong knowledge foundation and allowed researchers to build on and
expand our understanding of human behavior.
Theories
Jean-Jacques Rousseau - Natural Man
Jean-Jacques Rousseau (1712 - 1778), with his book Emile, or On Education,
introduced his pedagogic theory where the child should be brought up in
harmony with nature. The child should be introduced to society only
during the fourth stage of development, the age of moral self-worth (15
to 18 years of age). That way, the child enters society in an informed and self-reliant manner, with their own judgment. Rousseau's conceptualization of childhood and adolescence is based on his theory that human beings are inherently good but are corrupted by a society that denaturalizes them. Rousseau is a precursor of the child-centered approach in education.
Karl Mager - Social Pedagogy
Karl Mager (1810 - 1858) is often identified as the one who coined the term social pedagogy. He held the belief that education should focus not only on the acquisition of knowledge but also on the acquisition of culture through society, and should orient its activities to benefit the community. This also implies that knowledge should not come solely from individuals but also from the larger concept of society.
Paul Natorp - Social Pedagogy
Paul Natorp (1854 - 1924) was a German philosopher and educator. In 1899, he published Sozialpädagogik: Theorie der Willensbildung auf der Grundlage der Gemeinschaft (Social Pedagogy: The theory of educating the human will into a community asset). According to him, education should be social: an interaction between educational and social processes. Natorp believed in the model of the Gemeinschaft (small community) as the way to build universal happiness and achieve true humanity. At the time, philosophers like Jean-Jacques Rousseau, John Locke, Johann Heinrich Pestalozzi and Immanuel Kant were preoccupied with the structure of society and how it may influence human interrelations. Philosophers were thinking not solely of the child as an individual, but rather of what the child could bring to creating human togetherness and societal order.
Natorp's perspective was influenced by Plato's ideas about the relation between the individual and the city-state (polis). The polis
is a social and political structure of society that, according to
Plato, allows individuals to maximize their potential. It is strictly
structured with classes serving others and philosopher kings setting
universal laws and truths for all. Furthermore, Plato argued for the
need to pursue intellectual virtues rather than personal advancements
such as wealth and reputation. Natorp's interpretation of the concept of the polis
is that an individual will want to serve his/her community and state
after having been educated, as long as the education is social (SozialpÀdagogik).
Natorp focused on education for the working class as well as
social reform. His view of social pedagogy outlined that education is a
social process and social life is an educational process. Social
pedagogic practices are a deliberative and rational form of
socialization. Individuals become social human beings by being
socialized into society. Social pedagogy involves teachers and children
sharing the same social spaces.
Herman Nohl - Hermeneutic Perspective
Herman
Nohl (1879 - 1960) was a German pedagogue of the first half of the
twentieth century. He interpreted reality from a hermeneutical
perspective (methodological principles of interpretation) and tried to
expose the causes of social inequalities. According to Nohl, social pedagogy's aim is to foster the well-being of students by integrating youth initiatives, programs, and efforts into society. Teachers should be advocates for the welfare of their students and contribute to the social transformations this entails. Nohl conceptualized a holistic educative process that takes into account the historical, cultural, personal and social contexts of any given situation.
Robert Sears - Social Learning
Robert Richardson Sears (1908 - 1989) focused his research mostly on the stimulus-response theory.
Much of his theoretical effort was expended on understanding the way
children come to internalize the values, attitudes, and behaviours of
the culture in which they are raised. Just like Albert Bandura, he
focused most of his research on aggression, but also on the growth of
resistance to temptation and guilt, and the acquisition of
culturally-approved sex-role behaviors. Sears wanted to prove the
importance of the place of parents in the child's education,
concentrating on features of parental behaviour that either facilitated
or hampered the process. Such features include both general relationship
variables such as parental warmth and permissiveness and specific
behaviours such as punishment in the form of love withdrawal and power
assertion.
Albert Bandura - Social Learning
Albert Bandura
advanced the social learning theory by including the individual and the
environment in the process of learning and imitating behaviour. In
other words, children and adults learn or change behaviours by imitating
behaviours observed in others. Bandura notes that the environment plays an important role, as it provides the stimuli that trigger the learning process. For example, according to Bandura (1978), people learn aggressive behaviour through three sources: family members, the community, and the mass media. Research shows that parents who prefer aggressive solutions to their problems tend to have children who use aggressive tactics to deal with other people. Research also found that communities in which fighting prowess is valued have higher rates of aggressive behaviour. Findings further show that watching television can have at least four different effects on people: 1) it teaches aggressive styles of conduct, 2) it alters restraints on aggressive behavior, 3) it desensitizes and habituates people to violence, and 4) it shapes people's image of reality.
The environment also allows people to learn through another person's experience. For example, students do not cheat on exams (at least not openly) because they know the consequences, even if they have never experienced those consequences themselves.
However, according to Bandura, the learning process does not stop at
the influence of family, community, and media: internal processes
(individual thoughts, values, etc.) determine the frequency and
intensity with which an individual imitates and adopts a certain
behaviour.
Indeed, parents play an important role in a child's education for two
reasons: firstly, because of the frequency and intensity of the
interactions, and secondly because children often admire their parents
and take them as role models.
Therefore, even though the stimulus is the parents' interaction with
their children, children who did not admire their parents would not
reproduce their behaviour as often. That is the main difference between
early social learning theory and Bandura's point of view. This principle is called reciprocal determinism,
which means that the developmental process is bidirectional, and that
the individual has to value his environment in order to learn from it.
Bandura also states that this process starts at birth; indeed,
research shows that infants are more receptive to certain experiences
and less to others.
Albert Bandura also says that most human behaviours are goal-driven and
that we regulate our behaviour by weighing the benefits of a particular
behaviour against the trouble it can get us into.
Application in education and pedagogy
Social learning and social pedagogy have proven effective in practical
professions such as nursing, where the student can observe a trained
professional in a work setting and learn about nursing in
all its aspects: interactions, attitudes, co-working skills, and the
nursing job itself. Students who have taken part in social learning
report that their nursing skills increased, and that this was only
possible with a good learning environment, a good mentor, and a
sufficiently assertive student.
This means that social learning can be achieved with a good mentor, but
one needs to be a good listener too. This mentoring experience creates
what Albert Bandura called observational learning: students observe
a well-trained model/teacher, and the students' knowledge and
understanding increase.
Experiences in the field for student teachers are a good way to
show how social pedagogy and social learning contribute to one's
education. Indeed, field experiences are part of a student's life in
their route to their teaching degree. Field experiences are based on the
social learning theory; a student follows a teacher for some time, at
first observing the cooperating teacher and taking notes about the
teaching act. The second part of the field experience is actual
teaching, and receiving feedback from the role model and the students.
The student teachers try as much as they can to imitate what they have
learned by observing their cooperating teacher.
Cyberbullying
being an issue in schools, social pedagogy can help
reverse this trend. Indeed, a bullied pupil can build a relationship
with a particular mentor or role model, which in turn can empower the
student to deal with issues such as cyberbullying.
This can work on both the victim and the bully, since both may lack
confidence and affection. Using social pedagogy instead of punishments
and reactive actions is also a way to depart from the traditional model
of raising children, and of teaching, which relies on punishments and
rewards.
Parent education is also based on social learning. From birth,
children look at their parents and try to model what they do, how they
talk, and what they think. Of course, a child's environment is much
larger than the family environment alone, but the family is an
influential part of it. A study by Dubanoski and Tanabe
examined parenting and social learning: parents attended
classes that taught them social learning principles in order to improve
their children's behaviour. The classes taught the parents how to
record their children's behaviour objectively, and to respond by
teaching the correct behaviour rather than punishing the wrong one. A
significant number of parents had improved their children's behaviour
by the end of the study.
The issue of how long social learning takes is important for the
design of learning initiatives, teaching experiences and policy
interventions. The process of going beyond individual learning to a
broader understanding situated in a community of practice can take some
time to develop. A longitudinal case study in Australia looked at an environmental group concerned about land degradation.
The whole project was led by a local committee, Wallatin Wildlife and
Landcare. They wanted to "encourage social learning among landholders
through field visits, focus groups, and deliberative processes to
balance innovative 'thinking outside the box' with judicious use of
public funds".
They found that social learning was documented after approximately
fifteen months, but was initially restricted to an increased
understanding of the problem without improved knowledge to address it.
Further knowledge necessary to address the problem in focus emerged
during the third year of the program. This suggests that learning
initiatives could take around three years to develop sufficient new
knowledge embedded in a community of practice in order to address
complex problems.
Social media and technology
Benefits
Social
pedagogy is, in essence, the interaction between society and the
individual, which creates a learning experience. In the current
development of social pedagogy and social learning, the most notable
recent trend is the use of social media
and other forms of technology. If well designed within an
educational framework, social media can certainly help with the
development of certain essential skills.
Therefore, it can be seen that social media can be extremely useful
for developing some of the key skills needed in this digital age. For
instance, “the main feature of social media is that they empower the end
user to access, create, disseminate and share information easily in a
user-friendly, open environment".
By using social media, the learning experience becomes easier and more
accessible to all. By allowing social media in the pedagogical program
of our young students, it could help them to grow and fully participate
in our digital society.
With the growing use of technology and different social platforms
in many aspects of our lives, we can use social media at work and at home
as well as in schools. It can be seen that social media now enables
teachers to set online group work, based on cases or projects, and
students can collect data in the field, without any need for direct
face-to-face contact with either the teacher or other students.
Disadvantages
The
benefits of social media in education rest on how much easier
communication between individuals becomes. However, others argue
that it excludes the vital tacit knowledge that direct, face-to-face
interpersonal contact enables, and that social learning is bound up with
physical and spatial learning. Social learning includes sharing
experiences and working with others. Social media facilitates those
experiences but makes them less effective by eliminating the physical
interaction between individuals. The more time students spend on social
sites, the less time they spend socializing in person. Because of the
lack of nonverbal cues, like tone and inflection, the use of social
media is not an adequate replacement for face-to-face communication.
Students who spend a great amount of time on social networking sites are
less effective at communicating in person.
With the omnipresence of technology in our lives and easy
access to unlimited sources of information, the difference between using
technology as a tool and using it as an end in itself needs to be understood.
A scatterplot in which the areas of the sovereign states and dependent territories in the world are plotted on the vertical axis against their populations
on the horizontal axis. The upper plot uses raw data. In the lower
plot, both the area and population data have been transformed using the
logarithm function.
In statistics, data transformation is the application of a deterministic mathematical function to each point in a data set—that is, each data point zi is replaced with the transformed value yi = f(zi), where f is a function. Transforms are usually applied so that the data appear to more closely meet the assumptions of a statistical inference procedure that is to be applied, or to improve the interpretability or appearance of graphs.
Nearly always, the function that is used to transform the data is invertible, and generally is continuous.
The transformation is usually applied to a collection of comparable
measurements. For example, if we are working with data on people's
incomes in some currency unit, it would be common to transform each person's income value by the logarithm function.
Motivation
Guidance
for how data should be transformed, or whether a transformation should
be applied at all, should come from the particular statistical analysis
to be performed. For example, a simple way to construct an approximate
95% confidence interval for the population mean is to take the sample mean plus or minus two standard error units. However, the constant factor 2 used here is particular to the normal distribution, and is only applicable if the sample mean varies approximately normally. The central limit theorem states that in many situations, the sample mean does vary normally if the sample size is reasonably large. However, if the population is substantially skewed
and the sample size is at most moderate, the approximation provided by
the central limit theorem can be poor, and the resulting confidence
interval will likely have the wrong coverage probability. Thus, when there is evidence of substantial skew in the data, it is common to transform the data to a symmetric distribution before constructing a confidence interval. If desired, the confidence
interval can then be transformed back to the original scale using the
inverse of the transformation that was applied to the data.
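A minimal sketch of this back-transformation idea with NumPy, using an invented log-normal income sample (all numbers are illustrative, not from any real survey):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical right-skewed sample, e.g. incomes in some currency unit
incomes = rng.lognormal(mean=10.0, sigma=0.8, size=200)

# On the log scale the data are approximately symmetric, so the
# "mean plus or minus two standard errors" interval is reasonable there.
z = np.log(incomes)
mean_z = z.mean()
se_z = z.std(ddof=1) / np.sqrt(len(z))

# Map the interval endpoints back with the inverse transform (exp).
ci = (np.exp(mean_z - 2 * se_z), np.exp(mean_z + 2 * se_z))
```

One caveat the back-transformation always carries: the resulting interval is for the geometric mean (roughly, the median) of the original skewed distribution rather than its arithmetic mean.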
Data can also be transformed to make them easier to visualize.
For example, suppose we have a scatterplot in which the points are the
countries of the world, and the data values being plotted are the land
area and population of each country. If the plot is made using
untransformed data (e.g. square kilometers for area and the number of
people for population), most of the countries would be plotted in a tight
cluster of points in the lower left corner of the graph. The few
countries with very large areas and/or populations would be spread
thinly around most of the graph's area. Simply rescaling units (e.g., to
thousand square kilometers, or to millions of people) will not change
this. However, following logarithmic transformations of both area and population, the points will be spread more uniformly in the graph.
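A small numeric illustration of why rescaling does not help but a logarithm does (the area figures below are made up for illustration):

```python
import numpy as np

# Hypothetical land areas spanning many orders of magnitude (km^2)
area = np.array([2.0, 300.0, 50_000.0, 9_600_000.0, 17_100_000.0])

# Rescaling divides everything by the same constant, so the
# max/min ratio -- and hence the clustering in the plot -- is unchanged.
ratio_raw = area.max() / area.min()
ratio_rescaled = (area / 1000).max() / (area / 1000).min()

# A base-10 logarithm compresses that huge ratio into a modest range.
log_area = np.log10(area)
log_range = log_area.max() - log_area.min()   # about 6.9
```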
Another reason for applying data transformation is to improve
interpretability, even if no formal statistical analysis or
visualization is to be performed. For example, suppose we are comparing
cars in terms of their fuel economy. These data are usually presented as
"kilometers per liter" or "miles per gallon". However, if the goal is
to assess how much additional fuel a person would use in one year when
driving one car compared to another, it is more natural to work with the
data transformed by applying the reciprocal function, yielding liters per kilometer, or gallons per mile.
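A quick sketch of the reciprocal transformation for fuel economy; the mileage figures and the 20,000 km year are hypothetical:

```python
# Fuel economy of two hypothetical cars, in kilometers per liter
km_per_liter = {"car_a": 10.0, "car_b": 20.0}

# Reciprocal transform: liters per kilometer (fuel consumption)
liters_per_km = {k: 1.0 / v for k, v in km_per_liter.items()}

# On the transformed scale, extra fuel over a year is a simple difference:
annual_km = 20_000
extra_liters = annual_km * (liters_per_km["car_a"] - liters_per_km["car_b"])
# car_a uses about 1000 more liters per year than car_b
```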
Data transformation may be used as a remedial measure to make data suitable for modeling with linear regression if the original data violates one or more assumptions of linear regression. For example, the simplest linear regression models assume a linear relationship between the expected value of Y (the response variable to be predicted) and each independent variable
(when the other independent variables are held fixed). If linearity
fails to hold, even approximately, it is sometimes possible to transform
either the independent or dependent variables in the regression model
to improve the linearity. For example, addition of quadratic functions of the original independent variables may lead to a linear relationship with expected value of Y, resulting in a polynomial regression model, a special case of linear regression.
Another assumption of linear regression is homoscedasticity, that is the variance of errors must be the same regardless of the values of predictors. If this assumption is violated (i.e. if the data is heteroscedastic), it may be possible to find a transformation of Y alone, or transformations of both X (the predictor variables) and Y, such that the homoscedasticity assumption (in addition to the linearity assumption) holds true on the transformed variables and linear regression may therefore be applied on these.
Yet another application of data transformation is to address the problem of lack of normality in error terms. Univariate normality is not needed for least squares estimates of the regression parameters to be meaningful (see Gauss–Markov theorem). However confidence intervals and hypothesis tests will have better statistical properties if the variables exhibit multivariate normality.
Transformations that stabilize the variance of error terms (i.e. those
that address heteroscedasticity) often also help make the error terms
approximately normal.
Examples
Equation: Y = a + bX
Meaning: A unit increase in X is associated with an average increase of b units in Y.
Equation: log(Y) = a + bX
(From exponentiating both sides of the equation: Y = e^a e^(bX))
Meaning: A unit increase in X is associated with an average increase of b units in log(Y), or equivalently, Y increases on average by a multiplicative factor of e^b. For illustrative purposes, if base-10 logarithm were used instead of natural logarithm in the above transformation and the same symbols (a and b) are used to denote the regression coefficients, then a unit increase in X would lead to a 10^b times increase in Y on average. If b were 1, then this implies a 10-fold increase in Y for a unit increase in X.
Equation: Y = a + b log(X)
Meaning: A k-fold increase in X is associated with an average increase of b log(k) units in Y. For illustrative purposes, if base-10 logarithm were used instead of natural logarithm in the above transformation and the same symbols (a and b) are used to denote the regression coefficients, then a tenfold increase in X would result in an average increase of b units in Y.
Equation: log(Y) = a + b log(X)
(From exponentiating both sides of the equation: Y = e^a X^b)
Meaning: A k-fold increase in X is associated with a k^b multiplicative increase in Y on average. Thus if X doubles, it would result in Y changing by a multiplicative factor of 2^b.
Alternative
Generalized linear models
(GLMs) provide a flexible generalization of ordinary linear regression
that allows for response variables that have error distribution models
other than a normal distribution. GLMs allow the linear model to be
related to the response variable via a link function and allow the
magnitude of the variance of each measurement to be a function of its
predicted value.
Common cases
The logarithm transformation and square root transformation are commonly used for positive data, and the multiplicative inverse transformation (reciprocal transformation) can be used for non-zero data. The power transformation
is a family of transformations parameterized by a non-negative value λ
that includes the logarithm, square root, and multiplicative inverse
transformations as special cases. To approach data transformation
systematically, it is possible to use statistical estimation
techniques to estimate the parameter λ in the power transformation,
thereby identifying the transformation that is approximately the most
appropriate in a given setting. Since the power transformation family
also includes the identity transformation, this approach can also
indicate whether it would be best to analyze the data without a
transformation. In regression analysis, this approach is known as the Box–Cox transformation.
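For instance, SciPy's `boxcox` both estimates λ by maximum likelihood and applies the transformation. The sample below is simulated log-normal data, so the estimated λ should land near 0, the logarithm case:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=1.0, size=500)   # positive, right-skewed

# boxcox returns the transformed data and the ML estimate of lambda
y, lmbda = stats.boxcox(x)

skew_before = stats.skew(x)   # strongly positive
skew_after = stats.skew(y)    # close to zero
```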
The reciprocal transformation, some power transformations such as the Yeo–Johnson transformation, and certain other transformations such as applying the inverse hyperbolic sine, can be meaningfully applied to data that include both positive and negative values
(the power transformation is invertible over all real numbers if λ is
an odd integer). However, when both negative and positive values are
observed, it is common to begin by adding a constant to all
values, producing a set of non-negative data to which any power
transformation can be applied.
A common situation where a data transformation is applied is when a value of interest ranges over several orders of magnitude.
Many physical and social phenomena exhibit such behavior — incomes,
species populations, galaxy sizes, and rainfall volumes, to name a few.
Power transforms, and in particular the logarithm, can often be used to
induce symmetry in such data. The logarithm is often favored because it
is easy to interpret its result in terms of "fold changes."
The logarithm also has a useful effect on ratios. If we are comparing positive quantities X and Y using the ratio X / Y, then if X &lt; Y, the ratio is in the interval (0,1), whereas if X &gt; Y, the ratio is in the half-line (1,∞), where a ratio of 1 corresponds to equality. In an analysis where X and Y are treated symmetrically, the log-ratio log(X / Y) is zero in the case of equality, and it has the property that if X is K times greater than Y, the log-ratio is equidistant from zero relative to the situation where Y is K times greater than X (the log-ratios are log(K) and −log(K) in these two situations).
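The symmetry property is easy to check numerically; X, Y, and K below are arbitrary illustrative values:

```python
import math

Y = 10.0
K = 3.0
X = K * Y          # X is K times greater than Y

# The two log-ratios sit at equal distances from zero:
assert math.isclose(math.log(X / Y), math.log(K))
assert math.isclose(math.log(Y / X), -math.log(K))

# The raw ratios, by contrast, are asymmetric: 3.0 versus about 0.33
```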
If values are naturally restricted to be in the range 0 to 1, not including the end-points, then a logit transformation may be appropriate: this yields values in the range (−∞,∞).
Transforming to normality
It is not always necessary or desirable to transform a data set to
resemble a normal distribution. However, if symmetry or normality are
desired, they can often be induced through one of the power
transformations.
A linguistic power function is distributed according to the Zipf–Mandelbrot law. The distribution is extremely spiky and leptokurtic; this is why researchers had to turn their backs on statistics to solve, for example, authorship attribution problems. Nevertheless, the use of Gaussian statistics is perfectly possible by applying data transformation.
To assess whether normality has been achieved after transformation, any of the standard normality tests may be used. A graphical approach is usually more informative than a formal statistical test, and hence a normal quantile plot is commonly used to assess the fit of a data set to a normal population. Alternatively, rules of thumb based on the sample skewness and kurtosis have also been proposed.
Transforming to a uniform distribution or an arbitrary distribution
If we observe a set of n values X1, ..., Xn with no ties (i.e., there are n distinct values), we can replace Xi with the transformed value Yi = k, where k is defined such that Xi is the kth largest among all the X values. This is called the rank transform,[14] and creates data with a perfect fit to a uniform distribution. This approach has a population analogue.
From a uniform distribution, we can transform to any distribution with an invertible cumulative distribution function. If G is an invertible cumulative distribution function, and U is a uniformly distributed random variable, then the random variable G−1(U) has G as its cumulative distribution function.
Putting the two together, if X is any random variable, F is the invertible cumulative distribution function of X, and G is an invertible cumulative distribution function then the random variable G−1(F(X)) has G as its cumulative distribution function.
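Putting both steps into code, with SciPy's `rankdata` for the rank transform and the standard normal quantile function as an example of G⁻¹:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.exponential(size=9)          # continuous data, so no ties

# Rank transform: replace each value by its rank k (1 = smallest);
# dividing by n + 1 gives values strictly inside (0, 1).
ranks = stats.rankdata(x)
u = ranks / (len(x) + 1)

# Push the uniform values through an inverse CDF -- here the standard
# normal quantile function -- to obtain normal-looking values.
z = stats.norm.ppf(u)
```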
Many types of statistical data exhibit a "variance-on-mean relationship", meaning that the variability is different for data values with different expected values.
As an example, in comparing different populations in the world, the
variance of income tends to increase with mean income. If we consider a
number of small area units (e.g., counties in the United States) and
obtain the mean and variance of incomes within each county, it is common
that the counties with higher mean income also have higher variances.
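As a sketch of a variance-on-mean relationship and its removal, using simulated Poisson counts: for Poisson data the variance equals the mean, and the square root is the classical variance-stabilizing transformation:

```python
import numpy as np

rng = np.random.default_rng(4)
low = rng.poisson(lam=4, size=100_000)     # mean 4, variance about 4
high = rng.poisson(lam=100, size=100_000)  # mean 100, variance about 100

var_ratio_raw = high.var() / low.var()     # roughly 25

# After a square root, Var(sqrt(X)) is about 1/4 for any Poisson mean,
# so the two groups end up with comparable spread.
var_ratio_sqrt = np.sqrt(high).var() / np.sqrt(low).var()
```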
Univariate
functions can be applied point-wise to multivariate data to modify
their marginal distributions. It is also possible to modify some
attributes of a multivariate distribution using an appropriately
constructed transformation. For example, when working with time series and other types of sequential data, it is common to difference the data to improve stationarity. If data generated by a random vector X are observed as vectors Xi of observations with covariance matrix Σ, a linear transformation can be used to decorrelate the data. To do this, the Cholesky decomposition is used to express Σ = AA'. Then the transformed vector Yi = A−1Xi has the identity matrix as its covariance matrix.
1848 edition of American Phrenological Journal published by Fowlers & Wells, New York City
Phrenology, created by Franz Joseph Gall
(1758–1828) and Johann Gaspar Spurzheim (1776–1832) and best known for
the idea that one's personality could be determined by the variation of
bumps on their skull, proposed that different regions in one's brain
have different functions and may very well be associated with different
behaviours.
Gall and Spurzheim were the first to observe the crossing of pyramidal
tracts, thus explaining why lesions in one hemisphere are manifested in
the opposite side of the body. However, Gall and Spurzheim did not
attempt to justify phrenology on anatomical grounds. It has been argued
that phrenology was fundamentally a science of race. Gall considered the
most compelling argument in favor of phrenology the differences in
skull shape found in sub-Saharan Africans and the anecdotal evidence
(due to scientific travelers and colonists) of their intellectual
inferiority and emotional volatility. In Italy, Luigi Rolando carried out lesion experiments and performed electrical stimulation of the brain, including the Rolandic area.
Phineas Gage's accident
Phineas Gage became one of the first lesion case studies in 1848 when an explosion drove a large iron rod completely through his head, destroying his left frontal lobe.
He recovered with no apparent sensory, motor, or gross cognitive
deficits, but with behaviour so altered that friends described him as
"no longer being Gage," suggesting that the damaged areas are involved
in "higher functions" such as personality. However, Gage's mental changes are usually grossly exaggerated in modern presentations.
Subsequent cases (such as Broca's patient Tan) gave further support to the doctrine of specialization.
In the 20th century, in the process of treating epilepsy, Wilder Penfield produced maps of the location of various functions (motor, sensory, memory, vision) in the brain.
Major theories of the brain
Currently,
there are two major theories of the brain's cognitive function. The
first is the theory of modularity. Stemming from phrenology, this theory
supports functional specialization, suggesting the brain has different
modules that are domain specific in function. The second theory,
distributive processing, proposes that the brain is more interactive and
its regions are functionally interconnected rather than specialized.
Each orientation plays a role within certain aims, and the two tend to
complement each other (see the section "Collaboration" below).
Modularity
The
theory of modularity suggests that there are functionally specialized
regions in the brain that are domain specific for different cognitive
processes. Jerry Fodor
expanded the initial notion of phrenology by creating his Modularity of
the Mind theory. The Modularity of the Mind theory indicates that
distinct neurological regions called modules
are defined by their functional roles in cognition. He also traced many
of his concepts of modularity back to philosophers like Descartes, who
wrote about the mind being composed of "organs" or "psychological
faculties". An example of Fodor's concept of modules is seen in
cognitive processes such as vision, which have many separate mechanisms
for colour, shape and spatial perception.
One of the fundamental beliefs of domain specificity and the theory of modularity suggests that it is a consequence of natural selection
and is a feature of our cognitive architecture. Researchers Hirschfeld
and Gelman propose that because the human mind has evolved by natural
selection, it implies that enhanced functionality would develop if it
produced an increase in "fit" behaviour. Research on this evolutionary
perspective suggests that domain specificity is involved in the
development of cognition because it allows one to pinpoint adaptive
problems.
An issue for the modular theory of cognitive neuroscience is that
there are cortical anatomical differences from person to person.
Although many studies of modularity are undertaken from very specific
lesion case studies, the idea is to create a neurological function map
that applies to people in general. Extrapolating from lesion studies
and other case studies requires adherence to the universality assumption,
that there is no difference, in a qualitative sense, between subjects
who are intact neurologically. For example, two subjects would
fundamentally be the same neurologically before their lesions, and after
have distinctly different cognitive deficits. Subject 1 with a lesion
in the "A" region of the brain may show impaired functioning in
cognitive ability "X" but not "Y", while subject 2 with a lesion in area
"B" demonstrates reduced "Y" ability but "X" is unaffected; results
like these allow inferences to be made about brain specialization and
localization, also known as using a double dissociation.
The difficulty with this theory is that in typical non-lesioned
subjects, locations within the brain anatomy are similar but not
completely identical. This inherent deficit limits our ability to
generalize when using functional localizing
techniques (fMRI, PET, etc.). To account for this problem, the
coordinate-based Talairach and Tournoux stereotaxic system
is widely used to compare subjects' results to a standard brain using
an algorithm. Another solution using coordinates involves comparing
brains using sulcal reference points. A slightly newer technique is to use functional landmarks,
which combines sulcal and gyral landmarks (the grooves and folds of the
cortex) and then finding an area well known for its modularity such as
the fusiform face area. This landmark area then serves to orient the researcher to the neighboring cortex.
Future developments for modular theories of neuropsychology may
lie in "modular psychiatry". The concept is that a modular understanding
of the brain and advanced neuro-imaging techniques will allow for a
more empirical diagnosis of mental and emotional disorders. There has
been some work done towards this extension of the modularity theory with
regards to the physical neurological differences in subjects with
depression and schizophrenia, for example. Zielasek and Gaeble have set
out a list of requirements in the field of neuropsychology in order to
move towards neuropsychiatry:
To assemble a complete overview of putative modules of the human mind
To establish module-specific diagnostic tests (specificity, sensitivity, reliability)
To assess how far individual modules, sets of modules or their connections are affected in certain psychopathological situations
To probe novel module-specific therapies like the facial affect
recognition training or to retrain access to context information in the
case of delusions and hallucinations, in which "hyper-modularity" may
play a role
Research in the study of brain function can also be applied to cognitive behaviour therapy.
As therapy becomes increasingly refined, it is important to
differentiate cognitive processes in order to discover their relevance
towards different patient treatments. An example comes specifically from
studies on lateral specialization between the left and right cerebral
hemispheres of the brain. The functional specialization of these
hemispheres offers insight into different forms of cognitive
behaviour therapy, one method focusing on verbal cognition (the main
function of the left hemisphere) and the other emphasizing imagery or
spatial cognition (the main function of the right hemisphere). Examples of therapies that involve imagery, requiring right hemisphere activity in the brain, include systematic desensitization and anxiety management training.
Both of these therapy techniques rely on the patient's ability to use
visual imagery to cope with or replace symptoms such as
anxiety. Examples of cognitive behaviour therapies that involve verbal
cognition, requiring left hemisphere activity in the brain, include
self-instructional training and stress inoculation.
Both of these therapy techniques focus on patients' internal
self-statements, requiring them to use vocal cognition. When deciding
which cognitive therapy to employ, it is important to consider the
primary cognitive style of the patient. Many individuals have a tendency
to prefer visual imagery over verbalization and vice versa. One way of
figuring out which hemisphere a patient favours is by observing their
lateral eye movements. Studies suggest that eye gaze reflects the
activation of the cerebral hemisphere contralateral to the gaze direction. Thus,
when asking questions that require spatial thinking, individuals tend to
move their eyes to the left, whereas when asked questions that require
verbal thinking, individuals usually move their eyes to the right.
In conclusion, this information allows one to choose the optimal
cognitive behaviour therapeutic technique, thereby enhancing the
treatment of many patients.
Areas representing modularity in the brain
Fusiform face area
One of the most well known examples of functional specialization is the fusiform face area (FFA). Justine Sergent
was one of the first researchers to bring forth evidence of
the functional neuroanatomy of face processing. Using positron emission
tomography (PET), Sergent found that there were different patterns of
activation in response to the two different required tasks, face
processing versus object processing.
These results can be linked with her studies of brain-damaged patients
with lesions in the occipital and temporal lobes. Patients revealed that
there was an impairment of face processing but no difficulty
recognizing everyday objects, a disorder also known as prosopagnosia. Later research by Nancy Kanwisher using functional magnetic resonance imaging (fMRI), found specifically that the region of the inferior temporal cortex, known as the fusiform gyrus,
was significantly more active when subjects viewed, recognized and
categorized faces in comparison to other regions of the brain. Lesion
studies also supported this finding where patients were able to
recognize objects but unable to recognize faces. This provided evidence
towards domain specificity in the visual system, as Kanwisher acknowledges the Fusiform Face Area as a module in the brain, specifically the extrastriate cortex, that is specialized for face perception.
Visual area V4 and V5
While looking at regional cerebral blood flow (rCBF) using PET, researcher Semir Zeki directly demonstrated functional specialization within the visual cortex, known as visual modularity, first in the monkey and then in the human visual brain. He localized regions involved specifically in the perception of colour and of visual motion, as well as of orientation (form). For colour, visual area V4 was located when subjects were shown two identical displays, one multicoloured and the other in shades of grey. This was further supported by lesion studies in which individuals were unable to see colours after damage, a disorder known as achromatopsia. Combining PET and magnetic resonance imaging
(MRI), subjects viewing a moving checkerboard pattern versus a
stationary checkerboard pattern located visual area V5, which is now
considered to be specialized for visual motion (Watson et al., 1993).
This area of functional specialization was also supported by lesion
study patients whose damage caused cerebral motion blindness, a condition now referred to as cerebral akinetopsia.
Frontal lobes
Studies have found the frontal lobes to be involved in the executive functions of the brain, which are higher-level cognitive processes.
This control process is involved in the coordination, planning and
organization of actions towards an individual's goals, and it contributes to
such things as behaviour, language and reasoning. More
specifically, these were found to be functions of the prefrontal cortex,
and evidence suggests that these executive functions control processes
such as planning and decision making, error correction, and
overcoming habitual responses. Miller and Cummings used PET and
functional magnetic resonance imaging (fMRI) to further support functional
specialization of the frontal cortex. They found lateralization of
verbal working memory in the left frontal cortex and visuospatial
working memory in the right frontal cortex. Lesion studies support these
findings: patients with left frontal lobe damage exhibited problems in
controlling executive functions such as creating strategies. The dorsolateral, ventrolateral and anterior cingulate
regions within the prefrontal cortex are proposed to work together in
different cognitive tasks, a proposal related to interaction theories.
However, there has also been evidence suggesting strong individual
specializations within this network.
For instance, Miller and Cummings found that the dorsolateral
prefrontal cortex is specifically involved in the manipulation and
monitoring of sensorimotor information within working memory.
Right and left hemispheres
During the 1960s, Roger Sperry
conducted a natural experiment on epileptic patients who had previously
had their corpora callosa cut. The corpus callosum is the bundle of
fibres that links the right and left hemispheres of the brain.
Sperry et al.'s experiment was based on flashing images in the right and
left visual fields of his participants. Because the participants'
corpora callosa were cut, the information processed by each visual field
could not be transmitted to the other hemisphere. In one experiment,
Sperry flashed images in the right visual field (RVF), which would
subsequently be transmitted to the left hemisphere (LH) of the brain.
When asked to repeat what they had previously seen, participants were
fully capable of remembering the image flashed. However, when the
participants were then asked to draw what they had seen, they were
unable to. When Sperry et al. flashed images in the left visual field
(LVF), the information processed would be sent to the right hemisphere
(RH) of the brain. When asked to repeat what they had previously seen,
participants were unable to recall the image flashed, but were very
successful in drawing the image. Sperry therefore concluded that the
left hemisphere of the brain was dedicated to language, as the
participants could clearly name the image flashed. On the other hand,
he concluded that the right hemisphere of the brain was involved in
more creative activities such as drawing.
Parahippocampal place area
Located in the parahippocampal gyrus,
the parahippocampal place area (PPA) was named by Nancy Kanwisher and
Russell Epstein after an fMRI study showed that the PPA responds
optimally to presented scenes containing a spatial layout, minimally to
single objects, and not at all to faces.
It was also noted in this experiment that activity remains the same in
the PPA when viewing a scene with an empty room or a room filled with
meaningful objects. Kanwisher and Epstein proposed "that the PPA
represents places by encoding the geometry of the local environment".
In addition, Soojin Park and Marvin Chun posited that activation in the
PPA is viewpoint specific, and so responds to changes in the angle of
the scene. In contrast, another spatial mapping area, the retrosplenial cortex (RSC), is viewpoint invariant: its response does not change when the view changes. This perhaps indicates a complementary arrangement of functionally and anatomically separate visual processing brain areas.
Extrastriate body area
fMRI
studies have shown that the extrastriate body area (EBA), located in
the lateral occipitotemporal cortex, responds selectively when subjects
see human bodies or body parts, implying that it is functionally
specialized. The EBA does not respond optimally to objects or parts
of objects, but to human bodies and body parts, a hand for example. In
fMRI experiments conducted by Downing et al., participants were asked to
look at a series of pictures. These stimuli included objects, parts of
objects (for example, just the head of a hammer), figures of the human
body in a variety of positions and levels of detail (including line
drawings and stick figures), and body parts (hands or feet) without any body
attached. There was significantly more blood flow (and thus activation)
in response to human bodies, no matter how detailed, and body parts than to objects
or object parts.
Distributive processing
The
cognitive theory of distributed processing suggests that brain areas
are highly interconnected and process information in a distributed
manner.
A notable precedent for this view is the research of Justo Gonzalo on brain dynamics, in which several of the phenomena he observed could not be explained by the
traditional theory of localizations. From the gradation he observed
between different syndromes in patients with different cortical lesions,
he proposed in 1952 a model of functional gradients,
which permits an ordering and interpretation of multiple phenomena
and syndromes. The functional gradients are continuous functions across
the cortex describing a distributed specificity: for a given
sensory system, the specific gradient, contralateral in character, is
maximal in the corresponding projection area and decreases in gradation
towards more "central" zones and beyond, so that its final decline
reaches other primary areas.
As a consequence of the crossing and overlapping of the specific
gradients, in the central zone where the overlap is greatest there would
be an action of mutual integration, rather nonspecific (or multisensory) and bilateral in character owing to the corpus callosum.
This action would be maximal in the central zone and minimal towards
the projection areas. As the author stated (p. 20 of the English
translation),
"a functional continuity with regional variation is then offered, each
point of the cortex acquiring different properties but with certain
unity with the rest of the cortex. It is a dynamic conception of quantitative localizations".
A very similar gradients scheme was proposed by Elkhonon Goldberg in 1989.
Other researchers who provide evidence to support the theory of distributive processing include Anthony McIntosh and William Uttal,
who question and debate localization and modality specialization within
the brain.
McIntosh's research suggests that human cognition involves interactions
between the brain regions responsible for processing sensory information,
such as vision and audition, and other mediating areas such as the prefrontal
cortex. McIntosh explains that modularity is mainly observed in sensory
and motor systems; beyond these systems, however, modularity
becomes "fuzzier", and cross-connections between systems
increase.
He also shows that there is an overlap of functional
characteristics between the sensory and motor systems where these
regions are close to one another. These neural interactions
influence each other: activity changes in one area influence other
connected areas. With this, McIntosh suggests that a focus on
activity in one area alone may miss the changes in other integrative
areas. Neural interactions can be measured using analysis of covariance in neuroimaging.
McIntosh used this analysis to provide a clear example of the
interaction theory of distributed processing. In this study, subjects
learned that an auditory stimulus signalled a visual event. McIntosh
found activation (an increase in blood flow) in an area of the occipital cortex, a region of the brain involved in visual processing,
when the auditory stimulus was presented alone. Correlations between
the occipital cortex and other areas of the brain, such as the prefrontal cortex, premotor cortex and superior temporal cortex, showed a pattern of co-variation and functional connectivity.
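The covariation idea can be sketched computationally: functional connectivity of this kind is commonly estimated by correlating activity time series between a seed region and other regions. The sketch below uses entirely synthetic signals (the region names and numbers are illustrative, not McIntosh's data), assuming only that correlated fluctuations indicate functional coupling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic activity time series (200 time points) for four regions.
# The occipital, prefrontal and premotor signals share a common
# fluctuation, mimicking a co-varying network; "control" does not.
n = 200
shared = rng.standard_normal(n)
regions = {
    "occipital":  shared + 0.5 * rng.standard_normal(n),
    "prefrontal": shared + 0.5 * rng.standard_normal(n),
    "premotor":   shared + 0.5 * rng.standard_normal(n),
    "control":    rng.standard_normal(n),  # unrelated region
}

def functional_connectivity(seed, others):
    """Pearson correlation of a seed region with every other region."""
    return {name: float(np.corrcoef(seed, ts)[0, 1])
            for name, ts in others.items()}

seed = regions["occipital"]
conn = functional_connectivity(
    seed, {k: v for k, v in regions.items() if k != "occipital"})
for name, r in conn.items():
    print(f"occipital-{name}: r = {r:+.2f}")
```

Regions sharing the common signal show high correlations with the occipital seed, while the unrelated region does not, which is the pattern read as functional connectivity.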
Uttal focuses on the limits of localizing cognitive processes in
the brain. One of his main arguments is that since the late 1990s,
research in cognitive neuroscience has neglected conventional psychophysical
studies based on behavioural observation. He believes that current
research focuses on the technological advances of brain imaging
techniques such as MRI and PET scans.
He thus further suggests that this research depends on the
assumptions of localization and on hypothetical cognitive modules, and uses
such imaging techniques to pursue those assumptions. Uttal's major
concern involves many controversies over the validity,
over-assumptions and strong inferences that some of these images are
used to support. For instance, there is concern over the proper
utilization of control images in an experiment. Most of the cerebrum
is active during cognitive activity, so the increased activity in a
region must be measured relative to activity in a control condition.
In general, this comparison may produce false or exaggerated findings,
and it may encourage researchers to ignore regions of diminished activity
that may be crucial to the particular cognitive process being studied.
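The role of the control condition can be illustrated with a toy subtraction analysis of the kind imaging studies rely on. The numbers and grid below are invented for illustration: nearly every "voxel" is active at baseline, so only the task-minus-control difference isolates task-related changes, including decreases that a focus on peak activation would miss.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "activation maps": mean signal per voxel on a tiny 4x4 grid.
baseline = 100 + rng.normal(0, 1, size=(4, 4))  # widespread baseline activity
task = baseline + rng.normal(0, 1, size=(4, 4))
task[1, 1] += 6.0   # one voxel truly driven by the task
task[2, 3] -= 6.0   # one voxel whose activity *decreases* during the task

# Raw task activity is high everywhere; the subtraction reveals the changes.
contrast = task - baseline
print("most increased voxel:",
      np.unravel_index(contrast.argmax(), contrast.shape))
print("most decreased voxel:",
      np.unravel_index(contrast.argmin(), contrast.shape))
```

In the raw `task` map every voxel sits near 100, so nothing stands out; the `contrast` map exposes both the increase and the decrease, the latter being exactly the kind of diminished-activity region Uttal warns against discarding.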
Moreover, Uttal believes that localization researchers tend to ignore
the complexity of the nervous system. Many regions in the brain are
physically interconnected in a nonlinear system; hence, he argues,
behaviour is produced by a variety of system organizations.
Collaboration
The
two theories, modularity and distributive processing, can also be
combined. Operating simultaneously, these principles may interact
with each other in a collaborative effort to characterize the
functioning of the brain. Fodor himself, one of the major contributors
to the modularity theory, appears to share this sentiment. He noted that
modularity is a matter of degree, and that the brain is modular to the
extent that this warrants studying it with regard to its functional
specialization.
Although there are areas in the brain that are more specialized for
cognitive processes than others, the nervous system also integrates and
connects the information produced in these regions. In fact, the
distributive scheme of functional cortical gradients proposed by J. Gonzalo
already attempts to join the modular and distributive concepts: regional
heterogeneity would be a definitive acquisition (maximum specificity in
the projection paths and primary areas), but the rigid separation
between projection and association areas would be erased through the
continuous gradient functions.
Collaboration between the two theories would not only provide
a more unified perception and understanding of the world, but also
account for the ability to learn from it.