
Monday, November 11, 2024

Iron Age

From Wikipedia, the free encyclopedia

Although meteoric iron has been used for millennia in many regions, the beginning of the Iron Age is defined locally around the world by archaeological convention: it starts when smelted iron tools and weapons (especially of steel) replace their bronze equivalents in common use.

In Anatolia and the Caucasus, or Southeast Europe, the Iron Age began during the late 2nd millennium BC (c. 1300 BC). In the Ancient Near East, this transition occurred simultaneously with the Late Bronze Age collapse, during the 12th century BC (1200–1100 BC). The technology soon spread throughout the Mediterranean Basin region and to South Asia between the 12th and 11th century BC. Its further spread to Central Asia, Eastern Europe, and Central Europe was somewhat delayed, and Northern Europe was not reached until about the start of the 5th century BC (500 BC).

The Iron Age in India is stated as beginning with the ironworking Painted Grey Ware culture, dating from the 15th century BC, through to the reign of Ashoka in the 3rd century BC. The term "Iron Age" in the archaeology of South, East, and Southeast Asia is more recent and less common than for Western Eurasia. Africa did not have a universal "Bronze Age", and many areas transitioned directly from stone to iron. Some archaeologists believe that iron metallurgy was developed in sub-Saharan Africa independently from Eurasia and neighbouring parts of Northeast Africa as early as 2000 BC.

The concept of the Iron Age ending with the beginning of the written historiographical record has not generalized well, because written language and steel use developed at different times in different areas across the archaeological record. For instance, in China written history started before iron smelting began, so the term is used infrequently for the archaeology of China. For the Ancient Near East, the establishment of the Achaemenid Empire c. 550 BC is still traditionally used as an end date; later dates are considered historical by virtue of the record of Herodotus, even though considerable written records are now known from well back into the Bronze Age. In Central and Western Europe, the Roman conquests of the 1st century BC mark the end of the Iron Age. The Germanic Iron Age of Scandinavia is considered to end c. AD 800, with the beginning of the Viking Age.

History of the concept

Map showing the extent of the Chernoles culture in Eastern Europe during the late Bronze Age.

The three-age system of Stone, Bronze, and Iron Ages was first used for the archaeology of Europe during the first half of the 19th century, and by the latter half of the 19th century it had been extended to the archaeology of the Ancient Near East. Its name harks back to the mythological "Ages of Man" of Hesiod. As an archaeological era, it was first introduced to Scandinavia by Christian Jürgensen Thomsen during the 1830s. By the 1860s, it was embraced as a useful division of the "earliest history of mankind" in general and began to be applied in Assyriology. The now-conventional periodization in the archaeology of the Ancient Near East was developed during the 1920s and 1930s.

Definition of "iron"

Willamette Meteorite, the sixth largest in the world, is an iron–nickel meteorite.

Meteoric iron, a natural iron–nickel alloy, was used by various ancient peoples thousands of years before the Iron Age. The earliest-known meteoric iron artifacts are nine small beads dated to 3200 BC, which were found in burials at Gerzeh in Lower Egypt, having been shaped by careful hammering.

The characteristic of an Iron Age culture is the mass production of tools and weapons made not just of found iron, but from smelted steel alloys with an added carbon content. Only with the capability of the production of carbon steel does ferrous metallurgy result in tools or weapons that are harder and lighter than bronze.

Smelted iron appears sporadically in the archaeological record from the middle Bronze Age. While terrestrial iron is naturally abundant, temperatures above 1,250 °C (2,280 °F) are required to smelt it, which was impractical to achieve with the technology commonly available until the end of the second millennium BC. In contrast, the components of bronze—tin, with a melting point of 231.9 °C (449.4 °F), and copper, with a relatively moderate melting point of 1,085 °C (1,985 °F)—were within the capabilities of Neolithic kilns, which date back to 6000 BC and were able to produce temperatures greater than 900 °C (1,650 °F).

In addition to specially designed furnaces, ancient iron production required the development of complex procedures for the removal of impurities, the regulation of the admixture of carbon, and the invention of hot-working to achieve a useful balance of hardness and strength in steel. The adoption of steel was also governed by the economics of these metallurgical advances.

Chronology


Earliest evidence

The earliest tentative evidence for iron-making is a small number of iron fragments with the appropriate amounts of carbon admixture found in the Proto-Hittite layers at Kaman-Kalehöyük in modern-day Turkey, dated to 2200–2000 BC. Akanuma (2008) concludes that "The combination of carbon dating, archaeological context, and archaeometallurgical examination indicates that it is likely that the use of ironware made of steel had already begun in the third millennium BC in Central Anatolia". Souckova-Siegelova (2001) shows that iron implements were made in Central Anatolia in very limited quantities about 1800 BC and were in general use by elites, though not by commoners, during the New Hittite Empire (c. 1400–1200 BC).

Similarly, recent archaeological remains of iron-working in the Ganges Valley in India have been dated tentatively to 1800 BC. Tewari (2003) concludes that "knowledge of iron smelting and manufacturing of iron artifacts was well known in the Eastern Vindhyas and iron had been in use in the Central Ganga Plain, at least from the early second millennium BC". By the Middle Bronze Age increasing numbers of smelted iron objects (distinguishable from meteoric iron by the lack of nickel in the product) appeared in the Middle East, Southeast Asia and South Asia.

African sites are revealing dates as early as 2000–1200 BC. However, some recent studies date the inception of iron metallurgy in Africa between 3000 and 2500 BC, with evidence existing for early iron metallurgy in parts of Nigeria, Cameroon, and Central Africa, from as early as around 2,000 BC. The Nok culture of Nigeria may have practiced iron smelting from as early as 1000 BC, while the nearby Djenné-Djenno culture of the Niger Valley in Mali shows evidence of iron production from c. 250 BC. Iron technology across much of sub-Saharan Africa has an African origin dating to before 2000 BC. These findings confirm the independent invention of iron smelting in sub-Saharan Africa.

Beginning

Copy of The Warrior of Hirschlanden (German: Krieger von Hirschlanden), a statue of a nude ithyphallic warrior made of sandstone, the oldest known Iron Age life-size anthropomorphic statue north of the Alps.

Modern archaeological evidence identifies the start of large-scale iron production at about 1200 BC, marking the end of the Bronze Age. The beginning of the Iron Age in Europe is often considered part of the wider Bronze Age collapse in the Ancient Near East.

Anthony Snodgrass suggests that a shortage of tin and trade disruptions in the Mediterranean about 1300 BC forced metalworkers to seek an alternative to bronze. Many bronze implements were recycled into weapons during that time, and more widespread use of iron resulted in improved steel-making technology and lower costs. When tin became readily available again, iron was cheaper, stronger and lighter, and forged iron implements superseded cast bronze tools permanently.

In Central and Western Europe, the Iron Age lasted from c. 800 BC to c. 1 BC; the pre-Roman Iron Age of Northern Europe began c. 600 BC and reached northern Scandinavia by about 500 BC.

The Iron Age in the Ancient Near East is considered to last from c. 1200 BC (the Bronze Age collapse) to c. 550 BC (or 539 BC), roughly the beginning of historiography with Herodotus, marking the end of the proto-historical period.

In China, because writing was developed first, there is no recognizable prehistoric period characterized by ironworking, and Bronze Age China transitions almost directly into the Qin dynasty of imperial China. "Iron Age" in the context of China is sometimes used for the transitional period of c. 900 BC to 100 BC during which ferrous metallurgy was present even if not dominant.

[Timeline of Iron Age cultures: Greek Dark Ages, Archaic Greece, Classical Greece; Hallstatt culture, La Tène culture; Villanovan culture, Etruscan civilization, Roman Italy, Roman Empire; Pre-Roman Iron Age, Roman Iron Age, Germanic Iron Age, Viking Age; Third Intermediate Period of Egypt, Late Period of ancient Egypt; Achaemenid Empire; Painted Gray Ware, Northern Black Polished Ware, Maurya Empire]

Ancient Near East

The Iron Age in the Ancient Near East is believed to have begun after the discovery of iron smelting and smithing techniques in Anatolia, the Caucasus or Southeast Europe during the late 2nd millennium BC (c. 1300 BC). The earliest bloomery smelting of iron is found at Tell Hammeh, Jordan about 930 BC (determined from 14C dating).

The Early Iron Age in the Caucasus area is divided conventionally into two periods, Early Iron I, dated to about 1100 BC, and the Early Iron II phase from the tenth to ninth centuries BC. Many of the material culture traditions of the Late Bronze Age continued into the Early Iron Age. Thus, there is a sociocultural continuity during this transitional period.

In Iran, the earliest actual iron artifacts do not appear until the 9th century BC. For Iran, the best-studied archaeological site of this period is Teppe Hasanlu.

West Asia

In the Mesopotamian states of Sumer, Akkad and Assyria, the initial use of iron reaches far back, to perhaps 3000 BC. One of the earliest smelted iron artifacts known is a dagger with an iron blade found in a Hattic tomb in Anatolia, dating from 2500 BC. Iron weapons, which replaced bronze weapons, spread rapidly throughout the Near East (North Africa, southwest Asia) by the beginning of the 1st millennium BC.

The development of iron smelting was once attributed to the Hittites of Anatolia during the Late Bronze Age. As part of the Late Bronze Age-Early Iron Age, the Bronze Age collapse saw the slow, comparatively continuous spread of iron-working technology in the region. It was long believed that the success of the Hittite Empire during the Late Bronze Age had been based on the advantages entailed by the "monopoly" on ironworking at the time. Accordingly, the invading Sea Peoples would have been responsible for spreading the knowledge through that region. The idea of such a "Hittite monopoly" has been examined more thoroughly and no longer represents a scholarly consensus. While there are some iron objects from Bronze Age Anatolia, the number is comparable to iron objects found in Egypt and other places of the same time period; and only a small number of these objects are weapons.

Early examples and distribution of non-precious metal finds
Date | Crete | Aegean | Greece | Cyprus | Sub-total | Anatolia | Total
1300–1200 BC | 5 | 2 | 9 | 0 | 16 | 33 | 49
Total Bronze Age | 5 | 2 | 9 | 0 | 16 | 33 | 49
1200–1100 BC | 1 | 2 | 8 | 26 | 37 | N/A | 37
1100–1000 BC | 13 | 3 | 31 | 33 | 80 | N/A | 80
1000–900 BC | 37+ | 30 | 115 | 29 | 211 | N/A | 211
Total Iron Age | 51 | 35 | 163 | 88 | 328 | N/A | 328
(Columns do not sum precisely.)
[Timeline: Ancient Near East, Ramesside Period; Achaemenid Empire, Seleucid Empire, Parthian Empire, Sassanid Empire]

Dates are approximate; consult particular article for details.

Egypt

Iron metal is singularly scarce in collections of Egyptian antiquities. Bronze remained the primary material there until the conquest by the Neo-Assyrian Empire in 671 BC. The explanation seems to be that most surviving relics are the paraphernalia of tombs, the funeral vessels and vases, and, iron being considered an impure metal by the ancient Egyptians, it was never used in the manufacture of these or for any religious purposes. It was attributed to Seth, the spirit of evil who, according to Egyptian tradition, governed the central deserts of Africa. In the Black Pyramid of Abusir, dating before 2000 BC, Gaston Maspero found some pieces of iron. In the funeral text of Pepi I, the metal is mentioned. A sword bearing the name of pharaoh Merneptah as well as a battle axe with an iron blade and gold-decorated bronze shaft were both found in the excavation of Ugarit. A dagger with an iron blade found in Tutankhamun's tomb (14th century BC) was recently examined and found to be of meteoric origin.

Europe

Maiden Castle, Dorset, England. More than 2,000 Iron Age hillforts are known in Britain.

In Europe, the Iron Age is the last stage of prehistoric Europe and the first of the protohistoric periods, for which there are at first only descriptions of particular areas by Greek and Roman writers. For much of Europe, the period came to an abrupt local end after conquest by the Romans, though ironworking remained the dominant technology until recent times. Elsewhere it may last until the early centuries AD, and either Christianization or a new conquest during the Migration Period.

Iron working was introduced to Europe during the late 11th century BC, probably from the Caucasus, and slowly spread northwards and westwards over the succeeding 500 years. The Iron Age did not start when iron first appeared in Europe, but when it began to replace bronze in the preparation of tools and weapons. This did not happen at the same time throughout Europe; local cultural developments played a role in the transition to the Iron Age. For example, the Iron Age of Prehistoric Ireland begins about 500 BC (when the Greek Iron Age had already ended) and finishes about 400 AD. Iron technology came into widespread use in Europe at roughly the same time as in Asia. The prehistoric Iron Age in Central Europe is divided into two periods based on the Hallstatt culture (early Iron Age) and the La Tène culture (late Iron Age), each of whose material cultures is divided into four phases (A, B, C, D).

Culture | Phase A | Phase B | Phase C | Phase D
Hallstatt | 1200–700 BC: flat graves | 1200–700 BC: polychrome pottery | 700–600 BC: heavy iron and bronze swords | 600–475 BC: dagger swords, brooches, ring ornaments, girdle mounts
La Tène | 450–390 BC: S-shaped, spiral and round designs | 390–300 BC: iron swords, heavy knives, lanceheads | 300–100 BC: iron chains, iron swords, belts, heavy spearheads | 100–15 BC: iron reaping-hooks, saws, scythes and hammers
A sword of the Iron Age Cogotas II culture in Spain.

The Iron Age in Europe is characterized by an elaboration of designs of weapons, implements, and utensils. These are no longer cast but hammered into shape, and decoration is elaborate and curvilinear rather than simple rectilinear; the forms and character of the ornamentation of the northern European weapons resemble in some respects Roman arms, while in other respects they are peculiar and evidently representative of northern art.

Citânia de Briteiros, located in Guimarães, Portugal, is an example of an Iron Age archaeological site. This fortified settlement covered an area of 3.8 hectares (9.4 acres) and served as a Celtiberian stronghold against Roman invasions. It dates back more than 2,500 years. The site was researched by Francisco Martins Sarmento starting in 1874. A number of amphorae (containers usually for wine or olive oil), coins, fragments of pottery, weapons, and pieces of jewelry, as well as ruins of a bath and its pedra formosa (lit. 'handsome stone'), have been found here.

Asia

Central Asia

The Iron Age in Central Asia began when iron objects appear among the Indo-European Saka in present-day Xinjiang (China) between the 10th century BC and the 7th century BC, such as those found at the cemetery site of Chawuhukou.

The Pazyryk culture is an Iron Age archaeological culture (c. 6th to 3rd centuries BC) identified by excavated artifacts and mummified humans found in the Siberian permafrost in the Altay Mountains.

East Asia

[Timeline: Spring and Autumn Period, Warring States period, Iron Age China, Early Imperial China, Imperial China; Yayoi period, Kofun period; Gojoseon, Proto–Three Kingdoms of Korea, Three Kingdoms of Korea]

Dates are approximate; consult particular article for details.


In China, Chinese bronze inscriptions appear around 1200 BC, preceding the development of iron metallurgy, which was known by the 9th century BC. The large seal script is identified with a group of characters from a book entitled Shǐ Zhòu Piān (c. 800 BC). Therefore, in China prehistory had given way to history periodized by ruling dynasties by the start of iron use, so "Iron Age" is not typically used to describe a period of Chinese history. Iron metallurgy reached the Yangtse Valley toward the end of the 6th century BC. A few objects have been found at Changsha and Nanjing. The mortuary evidence suggests that the initial use of iron in Lingnan belongs to the mid-to-late Warring States period (from about 350 BC). Important non-precious husi-style metal finds include iron tools found at the tomb at Guwei-cun, dating from the 4th century BC.

The techniques used in Lingnan combine bivalve moulds of a distinct southern tradition with piece-mould technology incorporated from the Zhongyuan. The products of combining these two traditions are bells, vessels, weapons and ornaments, and sophisticated castings.

An Iron Age culture of the Tibetan Plateau has been associated tentatively with the Zhang Zhung culture described by early Tibetan writings.

In Japan, iron items, such as tools, weapons, and decorative objects, are postulated to have entered Japan during the late Yayoi period (c. 300 BC – 300 AD) or the succeeding Kofun period (c. 250–538 AD), most likely from the Korean Peninsula and China.

Distinguishing characteristics of the Yayoi period include the appearance of new pottery styles and the start of intensive rice agriculture in paddy fields. Yayoi culture flourished in a geographic area from southern Kyūshū to northern Honshū. The Kofun and the subsequent Asuka periods are sometimes referred to collectively as the Yamato period; the word kofun is Japanese for the type of burial mounds dating from that era.

Silla chest and neck armour from the National Museum of Korea in Seoul (3rd century AD).

Iron objects were introduced to the Korean peninsula through trade with chiefdoms and state-level societies in the Yellow Sea area during the 4th century BC, just at the end of the Warring States period but prior to the beginning of the Western Han dynasty. Yoon proposes that iron was first introduced to chiefdoms located along North Korean river valleys that flow into the Yellow Sea, such as the Cheongcheon and Taedong Rivers. Iron production quickly followed during the 2nd century BC, and iron implements came to be used by farmers by the 1st century in southern Korea. The earliest known cast-iron axes in southern Korea are found in the Geum River basin. Iron production began at the same time that the complex chiefdoms of Proto-historic Korea emerged; these complex chiefdoms were the precursors of early states such as Silla, Baekje, Goguryeo, and Gaya. Iron ingots were an important mortuary item and indicated the wealth or prestige of the deceased during this period.

South Asia

[Timeline: Iron Age in India, Janapada, Mahajanapadas, Magadha; Brihadratha dynasty, Pradyota dynasty, Haryanka dynasty, Shaishunaga dynasty, Nanda Empire, Maurya Empire]

Dates are approximate; consult particular article for details.


The earliest evidence of iron smelting predates the emergence of the Iron Age proper by several centuries. Iron was being used in Mundigak to manufacture some items in the 3rd millennium BC, such as a small copper/bronze bell with an iron clapper, a copper/bronze rod with two iron decorative buttons, and a copper/bronze mirror handle with a decorative iron button. Artefacts including small knives and blades have been discovered in the Indian state of Telangana that have been dated between 2400 BC and 1800 BC. The history of metallurgy in the Indian subcontinent began prior to the 3rd millennium BC. Archaeological sites in India, such as Malhar, Dadupur, Raja Nala Ka Tila, Lahuradewa, Kosambi and Jhusi (Allahabad) in present-day Uttar Pradesh, show iron implements in the period 1800–1200 BC; the evidence from Raja Nala ka Tila and Malhar suggests the use of iron by c. 1800/1700 BC. The evidence for extensive iron smelting comes from Malhar and its surrounding area, and the site is assumed to have been a centre supplying smelted bloomery iron to this region because of its location near the Karamnasa and Ganga rivers. The site shows iron agricultural implements, including sickles, nails, clamps and spearheads, by at least c. 1500 BC. Archaeological excavations in Hyderabad show an Iron Age burial site.

The beginning of the 1st millennium BC saw extensive developments in iron metallurgy in India. Technological advancement and mastery of iron metallurgy were achieved during this period of peaceful settlements. One ironworking centre in East India has been dated to the first millennium BC. In Southern India (present-day Mysore) iron appeared as early as the 12th to 11th centuries BC; these developments were too early for any significant close contact with the northwest of the country. The Indian Upanishads mention metallurgy, and the Indian Mauryan period saw advances in metallurgy. As early as 300 BC, and certainly by 200 AD, high-quality steel was produced in southern India by what would later be called the crucible technique. In this system, high-purity wrought iron, charcoal, and glass were mixed in a crucible and heated until the iron melted and absorbed the carbon.

The protohistoric Early Iron Age in Sri Lanka lasted from 1000 BC to 600 BC. Radiocarbon evidence has been collected from Anuradhapura and Aligala shelter in Sigiriya. The Anuradhapura settlement is recorded to extend 10 ha (25 acres) by 800 BC and grew to 50 ha (120 acres) by 700–600 BC to become a town. The skeletal remains of an Early Iron Age chief were excavated in Anaikoddai, Jaffna. The name "Ko Veta" is engraved in Brahmi script on a seal buried with the skeleton and is assigned by the excavators to the 3rd century BC. Ko, meaning "King" in Tamil, is comparable to such names as Ko Atan and Ko Putivira occurring in contemporary Brahmi inscriptions in south India. It is also speculated that Early Iron Age sites may exist in Kandarodai, Matota, Pilapitiya and Tissamaharama.

The earliest undisputed deciphered epigraphy found in the Indian subcontinent are the Edicts of Ashoka of the 3rd century BC, in the Brahmi script. Several inscriptions were thought to be pre-Ashokan by earlier scholars; these include the Piprahwa relic casket inscription, the Badli pillar inscription, the Bhattiprolu relic casket inscription, the Sohgaura copper plate inscription, the Mahasthangarh Brahmi inscription, the Eran coin legend, the Taxila coin legends, and the inscription on the silver coins of Sophytes. However, more recent scholars have dated them to later periods.

Southeast Asia

[Timeline: Prehistory of Indonesia, Buni culture, Tarumanagara; History of the Philippines (900–1521), Igorot society; Sa Huỳnh culture, Óc Eo culture, Imperial Vietnam]

Dates are approximate; consult particular article for details.

Lingling-o earrings from Luzon, Philippines

Archaeological sites in Thailand, such as Ban Don Ta Phet and Khao Sam Kaeo, have yielded metallic, stone, and glass artifacts stylistically associated with the Indian subcontinent, suggesting the Indianization of Southeast Asia beginning in the 4th to 2nd centuries BC, during the late Iron Age.

In the Philippines and Vietnam, the Sa Huynh culture showed evidence of an extensive trade network. Sa Huynh beads were made from glass, carnelian, agate, olivine, zircon, gold and garnet; most of these materials were not local to the region and were most likely imported. Han-dynasty-style bronze mirrors were also found in Sa Huynh sites. Conversely, Sa Huynh-produced ear ornaments have been found in archaeological sites in Central Thailand, as well as on Orchid Island.

Africa

Examples of African bloomery furnace types

Early evidence for iron technology in Sub-Saharan Africa can be found at sites such as KM2 and KM3 in northwest Tanzania and parts of Nigeria and the Central African Republic. Nubia was one of the relatively few places in Africa to have a sustained Bronze Age along with Egypt and much of the rest of North Africa.

Archaeometallurgical scientific knowledge and technological development originated in numerous centres of Africa; these centres of origin were located in West Africa, Central Africa, and East Africa, and because they lie within inner Africa, the developments are considered native African technologies. Iron metallurgical development occurred 2631–2458 BC at Lejja, in Nigeria, 2136–1921 BC at Obui, in the Central African Republic, 1895–1370 BC at Tchire Ouma 147, in Niger, and 1297–1051 BC at Dekpassanware, in Togo.

Very early copper and bronze working sites in Niger may date to as early as 1500 BC. There is also evidence of iron metallurgy in Termit, Niger from around this period. Nubia was a major manufacturer and exporter of iron after the expulsion of the Nubian dynasty from Egypt by the Assyrians in the 7th century BC.

Though there is some uncertainty, some archaeologists believe that iron metallurgy was developed independently in sub-Saharan West Africa, separately from Eurasia and neighboring parts of North and Northeast Africa.

Archaeological sites containing iron smelting furnaces and slag have also been excavated in the Nsukka region of southeast Nigeria, in what is now Igboland: dating to 2000 BC at the site of Lejja (Eze-Uzomaka 2009) and to 750 BC at the site of Opi (Holl 2009). The site of Gbabiri (in the Central African Republic) has yielded evidence of iron metallurgy from a reduction furnace and a blacksmith workshop, with earliest dates of 896–773 BC and 907–796 BC, respectively. Similarly, smelting in bloomery-type furnaces appears in the Nok culture of central Nigeria by about 550 BC and possibly a few centuries earlier.

Iron and copper working in Sub-Saharan Africa spread south and east from Central Africa in conjunction with the Bantu expansion, from the Cameroon region to the African Great Lakes in the 3rd century BC, reaching the Cape around 400 AD. However, iron working may have been practiced in central Africa as early as the 3rd millennium BC. Instances of carbon steel based on complex preheating principles were found to be in production around the 1st century CE in northwest Tanzania.

Typical bloomery iron production operational sequence starting with acquiring raw materials through smelting and smithing
[Timeline: Third Intermediate Period, Kingdom of Kush, Aksumite Empire; African Iron Age, Sub-Saharan Africa, Nok culture, Bantu expansion]

Dates are approximate; consult particular article for details


Sunday, November 10, 2024

Pattern recognition (psychology)

From Wikipedia, the free encyclopedia

In psychology and cognitive neuroscience, pattern recognition is a cognitive process that matches information from a stimulus with information retrieved from memory.

Pattern recognition occurs when information from the environment is received and entered into short-term memory, causing automatic activation of a specific content of long-term memory. An example of this is learning the alphabet in order. When a carer repeats "A, B, C" multiple times to a child, the child, using pattern recognition, says "C" after hearing "A, B" in order. Recognizing patterns allows anticipation of what is to come. Making the connection between memories and information perceived is a step in pattern recognition called identification. Pattern recognition requires repetition of experience. Semantic memory, which is used implicitly and subconsciously, is the main type of memory involved in recognition.

Pattern recognition is crucial not only to humans, but also to other animals. Even koalas, which possess less-developed thinking abilities, use pattern recognition to find and consume eucalyptus leaves. The human brain has developed more, but holds similarities to the brains of birds and lower mammals. The development of neural networks in the outer layer of the brain in humans has allowed for better processing of visual and auditory patterns. Spatial positioning in the environment, remembering findings, and detecting hazards and resources to increase chances of survival are examples of the application of pattern recognition for humans and animals.

There are six main theories of pattern recognition: template matching, prototype-matching, feature analysis, recognition-by-components theory, bottom-up and top-down processing, and Fourier analysis. The application of these theories in everyday life is not mutually exclusive. Pattern recognition allows us to read words, understand language, recognize friends, and even appreciate music. Each of the theories applies to various activities and domains where pattern recognition is observed. Facial, music and language recognition, and seriation are a few of such domains. Facial recognition and seriation occur through encoding visual patterns, while music and language recognition use the encoding of auditory patterns.

Theories

Template matching

Template matching theory describes the most basic approach to human pattern recognition. It is a theory that assumes every perceived object is stored as a "template" in long-term memory. Incoming information is compared to these templates to find an exact match. In other words, all sensory input is compared to multiple representations of an object to form one single conceptual understanding. The theory defines perception as a fundamentally recognition-based process. It assumes that everything we see, we understand only through past exposure, which then informs our future perception of the external world. For example, the letter A written in different typefaces and styles is still recognized as A, and not as B. This viewpoint is limited, however, in explaining how new experiences can be understood without being compared to an internal memory template.
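As a rough illustration of this idea, here is a minimal sketch in Python (not taken from the literature; the 3x3 "letter" grids and function names are invented for the example). Recognition under template matching succeeds only when the stimulus equals a stored template exactly:

# Toy illustration of template matching: each stored "template" is an exact
# representation, and recognition succeeds only on a perfect match.

# Hypothetical 3x3 binary "images" standing in for perceived letters.
TEMPLATES = {
    "A": ((0, 1, 0),
          (1, 1, 1),
          (1, 0, 1)),
    "B": ((1, 1, 0),
          (1, 1, 1),
          (1, 1, 1)),
}

def recognize_by_template(stimulus):
    """Return the label whose template matches the stimulus exactly, else None."""
    for label, template in TEMPLATES.items():
        if stimulus == template:      # exact, one-to-one comparison
            return label
    return None                       # a novel pattern has no match

if __name__ == "__main__":
    seen_before = ((0, 1, 0), (1, 1, 1), (1, 0, 1))
    slightly_new = ((0, 1, 0), (1, 1, 1), (1, 1, 1))
    print(recognize_by_template(seen_before))   # -> "A"
    print(recognize_by_template(slightly_new))  # -> None: the theory's weakness

The second call fails precisely because the stimulus differs from every stored template, which is the limitation noted above.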

Prototype matching

Unlike the exact, one-to-one, template matching theory, prototype matching instead compares incoming sensory input to one average prototype. This theory proposes that exposure to a series of related stimuli leads to the creation of a "typical" prototype based on their shared features. It reduces the number of stored templates by standardizing them into a single representation. The prototype supports perceptual flexibility, because unlike in template matching, it allows for variability in the recognition of novel stimuli. For instance, if a child had never seen a lawn chair before, they would still be able to recognize it as a chair because of their understanding of its essential characteristics as having four legs and a seat. This idea, however, limits the conceptualization of objects that cannot necessarily be "averaged" into one, like types of canines, for instance. Even though dogs, wolves, and foxes are all typically furry, four-legged, moderately sized animals with ears and a tail, they are not all the same, and thus cannot be strictly perceived with respect to the prototype matching theory.
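A comparable sketch (again hypothetical Python, with made-up feature vectors for "chair" and "table") shows how averaging exemplars into a single prototype lets a never-before-seen lawn chair still be recognized as a chair:

# Toy illustration of prototype matching: exemplars are averaged into one
# prototype per category, and a novel stimulus is assigned to the nearest
# prototype rather than requiring an exact match.
from statistics import mean

# Hypothetical feature vectors (leg count, seat height in cm, back height in cm).
CHAIR_EXEMPLARS = [(4, 45, 90), (4, 40, 85), (4, 50, 100)]
TABLE_EXEMPLARS = [(4, 75, 0), (4, 72, 0), (3, 70, 0)]

def prototype(exemplars):
    """Average the exemplars feature-by-feature into one 'typical' pattern."""
    return tuple(mean(values) for values in zip(*exemplars))

PROTOTYPES = {"chair": prototype(CHAIR_EXEMPLARS), "table": prototype(TABLE_EXEMPLARS)}

def recognize_by_prototype(stimulus):
    """Return the category whose prototype is closest (Euclidean distance)."""
    def distance(proto):
        return sum((a - b) ** 2 for a, b in zip(stimulus, proto)) ** 0.5
    return min(PROTOTYPES, key=lambda label: distance(PROTOTYPES[label]))

if __name__ == "__main__":
    lawn_chair = (4, 35, 70)   # never seen before, but still chair-like
    print(recognize_by_prototype(lawn_chair))  # -> "chair"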

Multiple discrimination scaling

Template and feature analysis approaches to recognition of objects (and situations) have been merged, and largely superseded, by multiple discrimination theory. This states that the amounts in a test stimulus of each salient feature of a template are recognized in any perceptual judgment as being at a distance, in the universal unit of 50% discrimination (the objective performance 'JND'), from the amount of that feature in the template.

Recognition–by–components theory

Image showing the breakdown of common geometric shapes (geons)

Similar to feature–detection theory, recognition by components (RBC) focuses on the bottom-up features of the stimuli being processed. First proposed by Irving Biederman (1987), this theory states that humans recognize objects by breaking them down into their basic 3D geometric shapes called geons (i.e., cylinders, cubes, cones, etc.). An example is how we break down a common item like a coffee cup: we recognize the hollow cylinder that holds the liquid and a curved handle off the side that allows us to hold it. Even though not every coffee cup is exactly the same, these basic components help us recognize the consistency across examples (or pattern). RBC suggests that there are fewer than 36 unique geons that when combined can form a virtually unlimited number of objects. To parse and dissect an object, RBC proposes we attend to two specific features: edges and concavities. Edges enable the observer to maintain a consistent representation of the object regardless of the viewing angle and lighting conditions. Concavities are where two edges meet and enable the observer to perceive where one geon ends and another begins.
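To make the structural idea concrete, here is a toy sketch (not Biederman's actual model; the object inventory and geon labels are invented) in which objects are recognized from a collection of parsed geons:

# Toy illustration of recognition-by-components: objects are described as
# collections of simple volumetric parts ("geons"), and a stimulus is
# recognized by matching its parsed geons against stored descriptions.
from collections import Counter

# Hypothetical structural descriptions: which geons each object is built from.
OBJECTS = {
    "coffee cup": Counter({"cylinder": 1, "curved handle": 1}),
    "pail":       Counter({"cylinder": 1, "curved handle": 1}),  # same parts, different arrangement
    "flashlight": Counter({"cylinder": 2}),
}

def recognize_by_components(parsed_geons):
    """Return all object labels whose geon inventory matches the parsed stimulus."""
    stimulus = Counter(parsed_geons)
    return [label for label, geons in OBJECTS.items() if geons == stimulus]

if __name__ == "__main__":
    # Edges and concavities would tell the visual system where to segment;
    # here we simply assume the segmentation into geons has already happened.
    print(recognize_by_components(["cylinder", "curved handle"]))
    # -> ['coffee cup', 'pail']: a geon inventory alone is ambiguous, which is
    # why the full theory also encodes how the geons are arranged relative to
    # one another.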

The RBC principles of visual object recognition can be applied to auditory language recognition as well. In place of geons, language researchers propose that spoken language can be broken down into basic components called phonemes. For example, there are 44 phonemes in the English language.

Top-down and bottom-up processing

Top-down processing

Top-down processing refers to the use of background information in pattern recognition. It always begins with a person's previous knowledge and makes predictions based on that already acquired knowledge. Psychologist Richard Gregory estimated that about 90% of visual information is lost on the way from the eye to the brain, which is why the brain must guess what the person sees based on past experiences. In other words, we construct our perception of reality, and these perceptions are hypotheses or propositions based on past experiences and stored information. Incorrect propositions lead to errors of perception such as visual illusions. Given a paragraph written in difficult handwriting, it is easier to understand what the writer wants to convey if one reads the whole paragraph rather than reading the words separately; the brain may be able to perceive and understand the gist of the paragraph due to the context supplied by the surrounding words.

Bottom-up processing

Bottom-up processing is also known as data-driven processing, because it originates with the stimulation of the sensory receptors. Psychologist James Gibson opposed the top-down model and argued that perception is direct and not subject to hypothesis testing, as Gregory proposed. He stated that sensation is perception and there is no need for extra interpretation, as there is enough information in our environment to make sense of the world in a direct way. His theory is sometimes known as the "ecological theory" because of the claim that perception can be explained solely in terms of the environment. An example of bottom-up processing involves presenting a flower at the center of a person's visual field. The sight of the flower and all the information about the stimulus are carried from the retina to the visual cortex in the brain. The signal travels in one direction.

Seriation

A simple seriation task involving arranging shapes by size

In psychologist Jean Piaget's theory of cognitive development, the third stage is called the Concrete Operational Stage. It is during this stage that the abstract principle of thinking called "seriation" is naturally developed in a child. Seriation is the ability to arrange items in a logical order along a quantitative dimension such as length, weight, or age. It is a general cognitive skill which is not fully mastered until after the nursery years. To seriate means to understand that objects can be ordered along a dimension, and to do so effectively the child needs to be able to answer the question "What comes next?" Seriation skills also help to develop problem-solving skills, which are useful in recognizing and completing patterning tasks.

Piaget's work on seriation

Piaget studied the development of seriation along with Szeminska in an experiment in which they used rods of varying lengths to test children's skills. They found that there were three distinct stages in the development of the skill. In the first stage, children around the age of 4 could not arrange the first ten rods in order; they could make smaller groups of 2–4 but could not put all the elements together. In the second stage, at 5–6 years of age, children could succeed in the seriation task with the first ten rods through trial and error, and could insert a further set of rods into the order, again by trial and error. In the third stage, 7–8-year-old children could arrange all the rods in order without much trial and error, using the systematic method of first looking for the smallest rod, then the smallest among the rest.
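The systematic strategy of the oldest children, repeatedly taking the smallest remaining rod, corresponds to what programmers would call a selection sort. The sketch below (illustrative Python, with invented rod lengths) mirrors that strategy:

# A minimal sketch, drawing an analogy that is mine rather than Piaget's:
# repeatedly finding the smallest remaining rod is essentially a selection
# sort over the rod lengths.

def seriate(rod_lengths):
    """Arrange rods in increasing order by always taking the smallest remaining one."""
    remaining = list(rod_lengths)
    ordered = []
    while remaining:
        smallest = min(remaining)   # "look for the smallest among the rest"
        remaining.remove(smallest)
        ordered.append(smallest)
    return ordered

if __name__ == "__main__":
    rods_cm = [9.5, 7.0, 10.5, 8.0, 9.0, 7.5, 10.0, 8.5, 11.0, 6.5]
    print(seriate(rods_cm))  # rods placed in order without trial and error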

Development of problem-solving skills

To develop the skill of seriation, which then helps advance problem-solving skills, children should be provided with opportunities to arrange things in order using the appropriate language, such as "big" and "bigger" when working with size relationships. They should also be given the chance to arrange objects in order based on the texture, sound, flavor and color. Along with specific tasks of seriation, children should be given the chance to compare the different materials and toys they use during play. Through activities like these, the true understanding of characteristics of objects will develop. To aid them at a young age, the differences between the objects should be obvious. Lastly, a more complicated task of arranging two different sets of objects and seeing the relationship between the two different sets should also be provided. A common example of this is having children attempt to fit saucepan lids to saucepans of different sizes, or fitting together different sizes of nuts and bolts.

Application of seriation in schools

To help build up math skills in children, teachers and parents can help them learn seriation and patterning. Young children who understand seriation can put numbers in order from lowest to highest. Eventually, they will come to understand that 6 is higher than 5, and 20 is higher than 10. Similarly, having children copy patterns or create patterns of their own, like ABAB patterns, is a great way to help them recognize order and prepare for later math skills, such as multiplication. Child care providers can begin exposing children to patterns at a very young age by having them make groups and count the total number of objects.

Facial pattern recognition

Recognizing faces is one of the most common forms of pattern recognition. Humans are extremely effective at remembering faces, but this ease and automaticity belies a very challenging problem. All faces are physically similar. Faces have two eyes, one mouth, and one nose all in predictable locations, yet humans can recognize a face from several different angles and in various lighting conditions.

Neuroscientists posit that recognizing faces takes place in three phases. The first phase starts with visually focusing on the physical features. The facial recognition system then needs to reconstruct the identity of the person from previous experiences. This provides us with the signal that this might be a person we know. The final phase of recognition completes when the face elicits the name of the person.

Although humans are great at recognizing faces under normal viewing angles, upside-down faces are tremendously difficult to recognize. This demonstrates not only the challenges of facial recognition but also how humans have specialized procedures and capacities for recognizing faces under normal upright viewing conditions.

Neural mechanisms

Brain animation highlighting the fusiform face area, thought to be where facial processing and recognition takes place

Scientists agree that there is a certain area in the brain specifically devoted to processing faces. This structure is called the fusiform gyrus, and brain imaging studies have shown that it becomes highly active when a subject is viewing a face.

Several case studies have reported that patients with lesions or tissue damage localized to this area have tremendous difficulty recognizing faces, even their own. Although most of this research is circumstantial, a study at Stanford University provided conclusive evidence for the fusiform gyrus' role in facial recognition. In a unique case study, researchers were able to send direct signals to a patient's fusiform gyrus. The patient reported that the faces of the doctors and nurses changed and morphed in front of him during this electrical stimulation. Researchers agree this demonstrates a convincing causal link between this neural structure and the human ability to recognize faces.

Facial recognition development

Although in adults, facial recognition is fast and automatic, children do not reach adult levels of performance (in laboratory tasks) until adolescence. Two general theories have been put forth to explain how facial recognition normally develops. The first, general cognitive development theory, proposes that the perceptual ability to encode faces is fully developed early in childhood, and that the continued improvement of facial recognition into adulthood is attributed to other general factors. These general factors include improved attentional focus, deliberate task strategies, and metacognition. Research supports the argument that these other general factors improve dramatically into adulthood. Face-specific perceptual development theory argues that the improved facial recognition between children and adults is due to a precise development of facial perception. The cause for this continuing development is proposed to be an ongoing experience with faces.

Developmental issues

Several developmental issues manifest as a decreased capacity for facial recognition. Using what is known about the role of the fusiform gyrus, research has shown that impaired social development along the autism spectrum is accompanied by a behavioral marker, in that these individuals tend to look away from faces, and a neurological marker, characterized by decreased neural activity in the fusiform gyrus. Similarly, those with developmental prosopagnosia (DP) struggle with facial recognition to the extent that they are often unable to identify even their own faces. Many studies report that around 2% of the world's population have developmental prosopagnosia, and that individuals with DP have a family history of the trait. Individuals with DP are behaviorally indistinguishable from those with physical damage or lesions of the fusiform gyrus, again implicating its importance to facial recognition. Even setting aside those with DP or neurological damage, there remains large variability in facial recognition ability across the population. It is unknown what accounts for these differences, whether a biological or an environmental disposition. Recent research analyzing identical and fraternal twins showed that facial recognition was significantly more highly correlated in identical twins, suggesting a strong genetic component to individual differences in facial recognition ability.

Language development

Pattern recognition in language acquisition

Research by Frost et al. (2013) suggests that infant language acquisition is linked to cognitive pattern recognition. Unlike classical nativist and behavioral theories of language development, scientists now believe that language is a learned skill. Studies at the Hebrew University and the University of Sydney both show a strong correlation between the ability to identify visual patterns and to learn a new language. Children with high shape recognition showed better grammar knowledge, even when controlling for the effects of intelligence and memory capacity. This is supported by the theory that language learning is based on statistical learning, the process by which infants perceive common combinations of sounds and words in language and use them to inform future speech production.

Phonological development

The first step in infant language acquisition is to distinguish between the most basic sound units of the native language. This includes every consonant, every short and long vowel sound, and any additional sounds written as letter combinations like "th" and "ph" in English. These units, called phonemes, are detected through exposure and pattern recognition. Infants use their "innate feature detector" capabilities to distinguish between the sounds of words. They split them into phonemes through a mechanism of categorical perception. Then they extract statistical information by recognizing which combinations of sounds are most likely to occur together, like "qu" or "h" plus a vowel. In this way, their ability to learn words is based directly on the accuracy of their earlier phonetic patterning.
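A small sketch (hypothetical Python with a toy "heard" vocabulary) illustrates the statistical side of this process: counting how often one sound follows another makes frequent transitions such as "qu" stand out from transitions that never occur.

# A minimal sketch of the statistical-learning idea with invented toy data:
# tally how often each sound follows another in heard words, then estimate
# transition probabilities from those counts.
from collections import Counter, defaultdict

heard = ["queen", "quick", "quiet", "phone", "graph", "the", "then", "water"]

transition_counts = defaultdict(Counter)
for word in heard:
    for first, second in zip(word, word[1:]):
        transition_counts[first][second] += 1

def transition_probability(first, second):
    """P(second | first), estimated from the toy 'exposure' above."""
    total = sum(transition_counts[first].values())
    return transition_counts[first][second] / total if total else 0.0

if __name__ == "__main__":
    print(transition_probability("q", "u"))  # 1.0: "q" is always followed by "u"
    print(transition_probability("t", "h"))  # ~0.67: "th" is a common transition here
    print(transition_probability("q", "z"))  # 0.0: never heard together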

Grammar development

The transition from phonemic differentiation into higher-order word production is only the first step in the hierarchical acquisition of language. Pattern recognition is furthermore utilized in the detection of prosody cues, the stress and intonation patterns among words. Then it is applied to sentence structure and the understanding of typical clause boundaries. This entire process is reflected in reading as well. First, a child recognizes patterns of individual letters, then words, then groups of words together, then paragraphs, and finally entire chapters in books. Learning to read and learning to speak a language are based on the "stepwise refinement of patterns" in perceptual pattern recognition.

Music pattern recognition

Music provides deep and emotional experiences for the listener. These experiences become contents of long-term memory, and every time we hear the same tunes, those contents are activated. Recognizing the content through the pattern of the music affects our emotions. The mechanisms that form the pattern recognition of music and the resulting experience have been studied by multiple researchers. The sensation felt when listening to our favorite music is evident in the dilation of the pupils, the increase in pulse and blood pressure, the streaming of blood to the leg muscles, and the activation of the cerebellum, the brain region associated with physical movement. While retrieving the memory of a tune demonstrates general recognition of musical pattern, pattern recognition also occurs while listening to a tune for the first time: the recurring nature of the metre allows the listener to follow a tune, recognize the metre, expect its upcoming occurrence, and figure out the rhythm. The excitement of following a familiar musical pattern happens when the pattern breaks and becomes unpredictable; this following and breaking of a pattern creates a problem-solving opportunity for the mind that forms the experience. Psychologist Daniel Levitin argues that the repetitions, melodic nature and organization of music create meaning for the brain. The brain stores information in an arrangement of neurons which retrieve the same information when activated by the environment. By constantly referencing information and additional stimulation from the environment, the brain constructs musical features into a perceptual whole.

The medial prefrontal cortex – one of the last areas affected by Alzheimer's disease – is the region activated by music.

Cognitive mechanisms

To understand music pattern recognition, we need to understand the underlying cognitive systems that each handle a part of this process. Various activities are at work in the recognition of a piece of music and its patterns, and researchers have begun to unveil the reasons behind the stimulated reactions to music. Montreal-based researchers asked ten volunteers who got "chills" listening to music to listen to their favorite songs while their brain activity was being monitored. The results show the significant role of the nucleus accumbens (NAcc) region, involved with cognitive processes such as motivation, reward and addiction, in creating the neural arrangements that make up the experience. A sense of reward prediction is created by anticipation before the climax of the tune, which resolves when the climax is reached. The longer the listener is denied the expected pattern, the greater the emotional arousal when the pattern returns. Musicologist Leonard Meyer used fifty measures of the fifth movement of Beethoven's String Quartet in C-sharp minor, Op. 131, to examine this notion. The stronger this experience is, the more vivid the memory it creates and stores, and this strength affects the speed and accuracy of retrieval and recognition of the musical pattern. The brain not only recognizes specific tunes; it also distinguishes standard acoustic features, speech, and music.

MIT researchers conducted a study to examine this notion. The results showed six neural clusters in the auditory cortex responding to the sounds. Four were triggered when hearing standard acoustic features, one specifically responded to speech, and the last exclusively responded to music. Researchers who studied the correlation between temporal evolution of timbral, tonal and rhythmic features of music, came to the conclusion that music engages the brain regions connected to motor actions, emotions and creativity. The research indicates that the whole brain "lights up" when listening to music. This amount of activity boosts memory preservation, hence pattern recognition.

Recognizing patterns of music is different for a musician and a listener. Although a musician may play the same notes every time, the details of the frequency will always be different. The listener will recognize the musical pattern and its type despite the variations. These musical types are conceptual and learned, meaning they may vary culturally. While listeners are involved with recognizing (implicit) musical material, musicians are involved with recalling it (explicit).

A UCLA study found that when watching or hearing music being played, neurons associated with the muscles needed for playing the instrument fire. Mirror neurons light up when musicians and non-musicians listen to a piece.

Developmental issues

Pattern recognition of music can build and strengthen other skills, such as musical synchrony and attentional performance, and musical notation and brain engagement. Even a few years of musical training enhance memory and attention levels. Scientists at the University of Newcastle conducted a study on patients with severe acquired brain injuries (ABIs) and healthy participants, using popular music to examine music-evoked autobiographical memories (MEAMs). The participants were asked to record their familiarity with the songs, whether they liked them and what memories they evoked. The results showed that the ABI patients had the highest number of MEAMs, and all the participants had MEAMs of a person, people or life period that were generally positive. The participants completed the task by utilizing pattern recognition skills; memory evocation caused the songs to sound more familiar and better liked. This research can be beneficial for rehabilitating patients with autobiographical amnesia who do not have a fundamental deficiency in autobiographical recall memory and have intact pitch perception.

A study at the University of California, Davis mapped the brains of participants while they listened to music. The results showed links between brain regions, autobiographical memories, and emotions activated by familiar music. This may explain the strong response of patients with Alzheimer's disease to music, and such research can help these patients with pattern recognition-enhancing tasks.

False pattern recognition

Whale, submarine or sheep?

The human tendency to see patterns that do not actually exist is called apophenia. Examples include the Man in the Moon, faces or figures in shadows, in clouds, and in patterns with no deliberate design, such as the swirls on a baked confection, and the perception of causal relationships between events which are, in fact, unrelated. Apophenia figures prominently in conspiracy theories, gambling, misinterpretation of statistics and scientific data, and some kinds of religious and paranormal experiences. Misperception of patterns in random data is called pareidolia. Recent research in neuroscience and cognitive science suggests understanding 'false pattern recognition' within the paradigm of predictive coding.

Shale gas

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Shale_gas...