
Wednesday, November 2, 2022

Mining

From Wikipedia, the free encyclopedia
 
Mining of sulfur from a deposit at the edge of Ijen's crater lake

Mining is the extraction of valuable minerals or other geological materials from the Earth, usually from an ore body, lode, vein, seam, reef, or placer deposit. The exploitation of these deposits for raw material is based on the economic viability of investing in the equipment, labor, and energy required to extract, refine and transport the materials found at the mine to manufacturers who can use the material.

Ores recovered by mining include metals, coal, oil shale, gemstones, limestone, chalk, dimension stone, rock salt, potash, gravel, and clay. Mining is required to obtain most materials that cannot be grown through agricultural processes, or feasibly created artificially in a laboratory or factory. Mining in a wider sense includes extraction of any non-renewable resource such as petroleum, natural gas, or even water. Modern mining processes involve prospecting for ore bodies, analysis of the profit potential of a proposed mine, extraction of the desired materials, and final reclamation or restoration of the land after the mine is closed.

Mining operations can create a negative environmental impact, both during the mining activity and after the mine has closed. Hence, most of the world's nations have passed regulations to decrease the impact; however, the outsized role of mining in generating business for often rural, remote or economically depressed communities means that governments may fail to fully enforce such regulations. Work safety has long been a concern as well, and modern practices have significantly improved safety in mines where they are enforced. Moreover, unregulated or poorly regulated mining, especially in developing economies, frequently contributes to local human rights violations and resource conflicts.

History

Prehistory

Since the beginning of civilization, people have used stone, clay and, later, metals found close to the Earth's surface. These were used to make early tools and weapons; for example, high quality flint found in northern France, southern England and Poland was used to create flint tools. Flint mines have been found in chalk areas where seams of the stone were followed underground by shafts and galleries. The mines at Grimes Graves and Krzemionki are especially famous, and like most other flint mines, are Neolithic in origin (c. 4000–3000 BC). Other hard rocks mined or collected for axes included the greenstone of the Langdale axe industry based in the English Lake District. The oldest-known mine on archaeological record is the Ngwenya Mine in Eswatini (Swaziland), which radiocarbon dating shows to be about 43,000 years old. At this site Paleolithic humans mined hematite to make the red pigment ochre. Mines of a similar age in Hungary are believed to be sites where Neanderthals may have mined flint for weapons and tools.

Ancient Egypt

Malachite

Ancient Egyptians mined malachite at Maadi. At first, Egyptians used the bright green malachite stones for ornamentations and pottery. Later, between 2613 and 2494 BC, large building projects required expeditions abroad to the area of Wadi Maghareh in order to secure minerals and other resources not available in Egypt itself. Quarries for turquoise and copper were also found at Wadi Hammamat, Tura, Aswan and various other Nubian sites on the Sinai Peninsula and at Timna.

Mining in Egypt occurred in the earliest dynasties. The gold mines of Nubia were among the largest and most extensive of any in Ancient Egypt. These mines are described by the Greek author Diodorus Siculus, who mentions fire-setting as one method used to break down the hard rock holding the gold. One of the complexes is shown in one of the earliest known maps. The miners crushed the ore and ground it to a fine powder before washing the powder for the gold dust.

Ancient Greece and Rome

Ancient Roman development of the Dolaucothi Gold Mines, Wales

Mining in Europe has a very long history. Examples include the silver mines of Laurium, which helped support the Greek city state of Athens. Although they had over 20,000 slaves working them, their technology was essentially identical to that of their Bronze Age predecessors. At other mines, such as on the island of Thassos, marble was quarried by the Parians after they arrived in the 7th century BC. The marble was shipped away and was later found by archaeologists to have been used in buildings including the tomb of Amphipolis. Philip II of Macedon, the father of Alexander the Great, captured the gold mines of Mount Pangeo in 357 BC to fund his military campaigns. He also captured gold mines in Thrace for minting coinage, eventually producing 26 tons per year.

However, it was the Romans who developed large-scale mining methods, especially the use of large volumes of water brought to the minehead by numerous aqueducts. The water was used for a variety of purposes, including removing overburden and rock debris, called hydraulic mining, as well as washing comminuted, or crushed, ores and driving simple machinery.

The Romans used hydraulic mining methods on a large scale to prospect for the veins of ore, especially using a now-obsolete form of mining known as hushing. They built numerous aqueducts to supply water to the minehead, where the water was stored in large reservoirs and tanks. When a full tank was opened, the flood of water sluiced away the overburden to expose the bedrock underneath and any gold-bearing veins. The rock was then worked by fire-setting to heat the rock, which would be quenched with a stream of water. The resulting thermal shock cracked the rock, enabling it to be removed by further streams of water from the overhead tanks. The Roman miners used similar methods to work cassiterite deposits in Cornwall and lead ore in the Pennines.

Sluicing methods were developed by the Romans in Spain in 25 AD to exploit large alluvial gold deposits, the largest site being at Las Medulas, where seven long aqueducts tapped local rivers and sluiced the deposits. The Romans also exploited the silver present in the argentiferous galena in the mines of Cartagena (Cartago Nova), Linares (Castulo), Plasenzuela and Azuaga, among many others. Spain was one of the most important mining regions, but all regions of the Roman Empire were exploited. In Great Britain the natives had mined minerals for millennia, but after the Roman conquest, the scale of the operations increased dramatically, as the Romans needed Britannia's resources, especially gold, silver, tin, and lead.

Roman techniques were not limited to surface mining. They followed the ore veins underground once opencast mining was no longer feasible. At Dolaucothi they stoped out the veins and drove adits through bare rock to drain the stopes. The same adits were also used to ventilate the workings, especially important when fire-setting was used. At other parts of the site, they penetrated the water table and dewatered the mines using several kinds of machines, especially reverse overshot water-wheels. These were used extensively in the copper mines at Rio Tinto in Spain, where one sequence comprised 16 such wheels arranged in pairs, and lifting water about 24 metres (79 ft). They were worked as treadmills with miners standing on the top slats. Many examples of such devices have been found in old Roman mines and some examples are now preserved in the British Museum and the National Museum of Wales.

Medieval Europe

Agricola, author of De Re Metallica
 
Gallery, 12th to 13th century, Germany

Mining as an industry underwent dramatic changes in medieval Europe. The mining industry in the early Middle Ages was mainly focused on the extraction of copper and iron. Other precious metals were also used, mainly for gilding or coinage. Initially, many metals were obtained through open-pit mining, and ore was primarily extracted from shallow depths, rather than through deep mine shafts. Around the 14th century, the growing use of weapons, armour, stirrups, and horseshoes greatly increased the demand for iron. Medieval knights, for example, were often laden with up to 100 pounds (45 kg) of plate or chain link armour in addition to swords, lances and other weapons. The overwhelming dependency on iron for military purposes spurred iron production and extraction processes.

The silver crisis of 1465 occurred when all mines had reached depths at which the shafts could no longer be pumped dry with the available technology. Although an increased use of banknotes, credit and copper coins during this period did decrease the value of, and dependence on, precious metals, gold and silver still remained vital to the story of medieval mining.

Due to differences in social structure, the increasing extraction of mineral deposits spread from central Europe to England in the mid-sixteenth century. On the continent, mineral deposits belonged to the crown, and this regalian right was stoutly maintained. But in England, royal mining rights were restricted to gold and silver (of which England had virtually no deposits) by a judicial decision of 1568 and a law in 1688. England had iron, zinc, copper, lead, and tin ores. Landlords who owned the base metals and coal under their estates then had a strong inducement to extract these metals or to lease the deposits and collect royalties from mine operators. English, German, and Dutch capital combined to finance extraction and refining. Hundreds of German technicians and skilled workers were brought over; in 1642 a colony of 4,000 foreigners was mining and smelting copper at Keswick in the northwestern mountains.

Use of water power in the form of water mills was extensive. The water mills were employed in crushing ore, raising ore from shafts, and ventilating galleries by powering giant bellows. Black powder was first used in mining in Selmecbánya, Kingdom of Hungary (now Banská Štiavnica, Slovakia) in 1627. Black powder allowed blasting of rock and earth to loosen and reveal ore veins. Blasting was much faster than fire-setting and allowed the mining of previously impenetrable metals and ores. In 1762, the world's first mining academy was established in the same town.

The widespread adoption of agricultural innovations such as the iron plowshare, as well as the growing use of metal as a building material, was also a driving force in the tremendous growth of the iron industry during this period. Inventions like the arrastra were often used by the Spanish to pulverize ore after it had been mined. This device was powered by animals and used the same principles used for grain threshing.

Much of the knowledge of medieval mining techniques comes from books such as Biringuccio's De la pirotechnia and probably most importantly from Georg Agricola's De re metallica (1556). These books detail many different mining methods used in German and Saxon mines. A prime issue in medieval mines, which Agricola explains in detail, was the removal of water from mining shafts. As miners dug deeper to access new veins, flooding became a very real obstacle. The mining industry became dramatically more efficient and prosperous with the invention of mechanically- and animal-driven pumps.

Africa

Iron metallurgy in Africa dates back over four thousand years. Gold became an important commodity for Africa during the trans-Saharan gold trade from the 7th century to the 14th century. Gold was often traded to Mediterranean economies that demanded gold and could supply salt, even though salt was abundant in much of Africa thanks to the mines and resources of the Sahara desert. The trading of gold for salt was mostly used to promote trade between the different economies. Since the 19th century, gold and diamond mining in Southern Africa has had major political and economic impacts. The Democratic Republic of Congo is the largest producer of diamonds in Africa, with an estimated 12 million carats in 2019. Other types of mining reserves in Africa include cobalt, bauxite, iron ore, coal, and copper.

Oceania

Gold and coal mining started in Australia and New Zealand in the 19th century. Nickel has become important in the economy of New Caledonia.

In Fiji, in 1934, the Emperor Gold Mining Company Ltd. established operations at Vatukoula, followed in 1935 by the Loloma Gold Mines, N.L., and then by Fiji Mines Development Ltd. (aka Dolphin Mines Ltd.). These developments ushered in a “mining boom”, with gold production rising more than a hundred-fold, from 931.4 oz in 1934 to 107,788.5 oz in 1939, an output then comparable to the combined production of New Zealand and Australia's eastern states.

Americas

Lead mining in the upper Mississippi River region of the U.S., 1865

During prehistoric times, early Americans mined large amounts of copper along Lake Superior's Keweenaw Peninsula and on nearby Isle Royale; metallic copper was still present near the surface in colonial times. Indigenous peoples used Lake Superior copper from at least 5,000 years ago; copper tools, arrowheads, and other artifacts that were part of an extensive native trade network have been discovered. In addition, obsidian, flint, and other minerals were mined, worked, and traded. Early French explorers who encountered the sites made no use of the metals due to the difficulties of transporting them, but the copper was eventually traded throughout the continent along major river routes.

Miners at the Tamarack Mine in Copper Country, Michigan, U.S. in 1905.

In the early colonial history of the Americas, "native gold and silver was quickly expropriated and sent back to Spain in fleets of gold- and silver-laden galleons", the gold and silver originating mostly from mines in Central and South America. Turquoise dated at 700 AD was mined in pre-Columbian America; in the Cerillos Mining District in New Mexico, an estimate of "about 15,000 tons of rock had been removed from Mt. Chalchihuitl using stone tools before 1700."

A hose sprays water into a placer mine in Fairplay, Colorado, to assist with mining operations in the early 1900s. (Park County Local History Digital Archive)

In 1727 Louis Denys (Denis) (1675–1741), sieur de La Ronde – brother of Simon-Pierre Denys de Bonaventure and the son-in-law of René Chartier – took command of Fort La Pointe at Chequamegon Bay, where natives informed him of an island of copper. La Ronde obtained permission from the French crown to operate mines in 1733, becoming "the first practical miner on Lake Superior"; seven years later, mining was halted by an outbreak of hostilities between the Sioux and Chippewa tribes.

Mining in the United States became widespread in the 19th century, and the United States Congress passed the General Mining Act of 1872 to encourage mining of federal lands. As with the California Gold Rush in the mid-19th century, mining for minerals and precious metals, along with ranching, became a driving factor in the U.S. Westward Expansion to the Pacific coast. With the exploration of the West, mining camps sprang up and "expressed a distinctive spirit, an enduring legacy to the new nation"; Gold Rushers would experience the same problems as the Land Rushers of the transient West that preceded them. Aided by railroads, many people traveled West for work opportunities in mining. Western cities such as Denver and Sacramento originated as mining towns.

When new areas were explored, it was usually the gold (placer and then lode) and then silver that were taken into possession and extracted first. Other metals would often wait for railroads or canals, as coarse gold dust and nuggets do not require smelting and are easy to identify and transport.

Modernity

View showing miners' clothes suspended by pulleys, also wash basins and ventilation system, Kirkland Lake, Ontario, 1936.

In the early 20th century, the gold and silver rush to the western United States also stimulated mining for coal as well as base metals such as copper, lead, and iron. Areas in modern Montana, Utah, Arizona, and later Alaska became predominant suppliers of copper to the world, which was increasingly demanding copper for electrical and household goods. Canada's mining industry grew more slowly than did the United States' due to limitations in transportation, capital, and U.S. competition; Ontario was the major producer of the early 20th century with nickel, copper, and gold.

Meanwhile, Australia experienced the Australian gold rushes and by the 1850s was producing 40% of the world's gold, followed by the establishment of large mines such as the Mount Morgan Mine, which ran for nearly a hundred years, Broken Hill ore deposit (one of the largest zinc-lead ore deposits), and the iron ore mines at Iron Knob. After declines in production, another boom in mining occurred in the 1960s. Now, in the early 21st century, Australia remains a major world mineral producer.

In the early 21st century, a globalized mining industry of large multinational corporations has arisen. Peak minerals and environmental impacts have also become a concern. Demand for certain elements, particularly rare earth minerals, has begun to increase as a result of new technologies.

Mine development and life cycle

The process of mining from discovery of an ore body through extraction of minerals and finally to returning the land to its natural state consists of several distinct steps. The first is discovery of the ore body, which is carried out through prospecting or exploration to find and then define the extent, location and value of the ore body. This leads to a mathematical resource estimation of the size and grade of the deposit.
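As a rough illustration of what such a resource estimation involves (a minimal sketch only; the block volumes, densities, grades and cut-off below are hypothetical values, not data from any real deposit), the snippet tallies tonnage and average grade above an assumed cut-off grade from a toy block model:

```python
# Toy grade-tonnage estimate above a cut-off grade.
# All volumes, densities, grades and the cut-off are hypothetical
# illustration values, not data from any real deposit.

blocks = [
    # (volume in m^3, density in t/m^3, gold grade in g/t)
    (10_000, 2.7, 0.4),
    (10_000, 2.7, 1.8),
    (10_000, 2.8, 3.2),
    (10_000, 2.6, 0.9),
]

cut_off = 1.0  # g/t; blocks below this grade are treated as waste

ore_tonnes = 0.0
contained_gold_g = 0.0
for volume, density, grade in blocks:
    tonnes = volume * density
    if grade >= cut_off:
        ore_tonnes += tonnes
        contained_gold_g += tonnes * grade  # grams of gold in the block

avg_grade = contained_gold_g / ore_tonnes if ore_tonnes else 0.0
print(f"Ore above cut-off: {ore_tonnes:,.0f} t at {avg_grade:.2f} g/t "
      f"({contained_gold_g / 1000:,.1f} kg contained gold)")
```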

This estimation is used to conduct a pre-feasibility study to determine the theoretical economics of the ore deposit. This identifies, early on, whether further investment in estimation and engineering studies is warranted and identifies key risks and areas for further work. The next step is to conduct a feasibility study to evaluate the financial viability, the technical and financial risks, and the robustness of the project.
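To make the "theoretical economics" weighed by a pre-feasibility study concrete, here is a minimal discounted-cash-flow sketch; every figure (grade, recovery, price, costs, mine life, discount rate) is an assumption invented for illustration, not a method or dataset from the article:

```python
# Minimal discounted-cash-flow sketch of the arithmetic a pre-feasibility
# study formalizes. All inputs below are hypothetical assumptions.

annual_ore_tonnes = 1_000_000      # tonnes milled per year
grade_g_per_t = 1.5                # average gold grade, g/t
recovery = 0.90                    # fraction of contained metal recovered
gold_price_per_g = 60.0            # US$ per gram (assumed)
operating_cost_per_t = 45.0        # US$ per tonne mined and milled
capital_cost = 250_000_000         # up-front development cost, US$
mine_life_years = 10
discount_rate = 0.08

annual_revenue = annual_ore_tonnes * grade_g_per_t * recovery * gold_price_per_g
annual_cash_flow = annual_revenue - annual_ore_tonnes * operating_cost_per_t

npv = -capital_cost + sum(
    annual_cash_flow / (1 + discount_rate) ** year
    for year in range(1, mine_life_years + 1)
)
print(f"Annual cash flow: ${annual_cash_flow:,.0f}")
print(f"NPV at {discount_rate:.0%}: ${npv:,.0f}")
```

A negative or marginal net present value at this stage is exactly the kind of result that stops further investment before expensive engineering studies begin.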

This is when the mining company makes the decision whether to develop the mine or to walk away from the project. This includes mine planning to evaluate the economically recoverable portion of the deposit, the metallurgy and ore recoverability, marketability and payability of the ore concentrates, engineering concerns, milling and infrastructure costs, finance and equity requirements, and an analysis of the proposed mine from the initial excavation all the way through to reclamation. The proportion of a deposit that is economically recoverable is dependent on the enrichment factor of the ore in the area.

To gain access to the mineral deposit within an area it is often necessary to mine through or remove waste material which is not of immediate interest to the miner. The total movement of ore and waste constitutes the mining process. Often more waste than ore is mined during the life of a mine, depending on the nature and location of the ore body. Waste removal and placement is a major cost to the mining operator, so a detailed characterization of the waste material forms an essential part of the geological exploration program for a mining operation.
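In surface mining this waste-to-ore balance is commonly expressed as a stripping ratio; a minimal sketch with hypothetical tonnages:

```python
# Stripping ratio: tonnes of waste moved per tonne of ore mined.
# Tonnages are hypothetical illustration values.

waste_tonnes = 12_000_000
ore_tonnes = 4_000_000

stripping_ratio = waste_tonnes / ore_tonnes
waste_share = waste_tonnes / (waste_tonnes + ore_tonnes)

print(f"Stripping ratio: {stripping_ratio:.1f} : 1")
print(f"Waste is {waste_share:.0%} of all material moved")
```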

Once the analysis determines a given ore body is worth recovering, development begins to create access to the ore body. The mine buildings and processing plants are built, and any necessary equipment is obtained. The operation of the mine to recover the ore begins and continues as long as the company operating the mine finds it economical to do so. Once all the ore that the mine can produce profitably is recovered, reclamation can begin, to make the land used by the mine suitable for future use.

Technical and economic challenges notwithstanding, successful mine development must also address human factors. Working conditions are paramount to success, especially with regard to exposures to dusts, radiation, noise, explosives hazards, and vibration, as well as illumination standards. Mining today increasingly must address environmental and community impacts, including psychological and sociological dimensions. Thus, mining educator Frank T. M. White (1909–1971) broadened the focus to the “total environment of mining”, including reference to community development around mining, and how mining is portrayed to an urban society that depends on the industry although it is seemingly unaware of this dependency. He stated, “[I]n the past, mining engineers have not been called upon to study the psychological, sociological and personal problems of their own industry – aspects that nowadays are assuming tremendous importance. The mining engineer must rapidly expand his knowledge and his influence into these newer fields.”

Techniques

Underground longwall mining.

Mining techniques can be divided into two common excavation types: surface mining and sub-surface (underground) mining. Today, surface mining is much more common, and produces, for example, 85% of minerals (excluding petroleum and natural gas) in the United States, including 98% of metallic ores.

Targets are divided into two general categories of materials: placer deposits, consisting of valuable minerals contained within river gravels, beach sands, and other unconsolidated materials; and lode deposits, where valuable minerals are found in veins, in layers, or in mineral grains generally distributed throughout a mass of actual rock. Both types of ore deposit, placer or lode, are mined by both surface and underground methods.

Some mining, including much of the rare earth elements and uranium mining, is done by less-common methods, such as in-situ leaching: this technique involves digging neither at the surface nor underground. The extraction of target minerals by this technique requires that they be soluble, e.g., potash, potassium chloride, sodium chloride, sodium sulfate, which dissolve in water. Some minerals, such as copper minerals and uranium oxide, require acid or carbonate solutions to dissolve.

Artisanal

Artisanal gold mines near Dodoma, Tanzania. Makeshift sails lead fresh air underground.

An artisanal miner or small-scale miner (ASM) is a subsistence miner who is not officially employed by a mining company, but works independently, mining minerals using their own resources, usually by hand.

Small-scale mining includes enterprises or individuals that employ workers for mining, but who generally still use manually intensive methods and work with hand tools.

Interior of an artisanal mine near Low's Creek, Mpumalanga Province, South Africa. The human figures, exploring this mine, show the scale of tunnels driven entirely with hand tools (two-kilogram (4.4 lb) hammer and hand-forged scrap-steel chisel).

Artisanal miners often undertake the activity of mining seasonally – for example crops are planted in the rainy season, and mining is pursued in the dry season. However, they also frequently travel to mining areas and work year-round. There are four broad types of ASM: permanent artisanal mining, seasonal (annually migrating during idle agricultural periods), rush-type (mass migration, often pulled by commodity price jumps), and shock-push (poverty-driven, following conflict or natural disasters).

ASM is an important socio-economic sector for the rural poor in many developing nations, many of whom have few other options for supporting their families. Over 90% of the world's mining workforce works in ASM. There are an estimated 40.5 million men, women and children directly engaged in ASM, across more than 80 countries in the global south. The ASM sector produces 20% of the global gold supply, 80% of the global gemstone supply, 20% of the global diamond supply, and 25% of global tin production. More than 150 million people depend on ASM for their livelihood. 70–80% of small-scale miners are informal, and approximately 30% are women, although this ranges from 5% to 80% across countries and commodities.

Surface

Surface mining is done by removing surface vegetation, dirt, and bedrock to reach buried ore deposits. Techniques of surface mining include: open-pit mining, which is the recovery of materials from an open pit in the ground; quarrying, identical to open-pit mining except that it refers to sand, stone and clay; strip mining, which consists of stripping surface layers off to reveal ore underneath; and mountaintop removal, commonly associated with coal mining, which involves taking the top of a mountain off to reach ore deposits at depth. Most placer deposits, because they are shallowly buried, are mined by surface methods. Finally, landfill mining involves sites where landfills are excavated and processed. Landfill mining has been thought of as a long-term solution to methane emissions and local pollution.

High wall

Coalburg Seam highwall mining at ADDCAR 16 Logan County WV

High wall mining, which evolved from auger mining, is another form of surface mining. In high wall mining, the remaining part of a coal seam previously exploited by other surface-mining techniques has too much overburden to be removed but can still be profitably exploited from the side of the artificial cliff made by previous mining. A typical cycle alternates sumping, which undercuts the seam, and shearing, which raises and lowers the cutter-head boom to cut the entire height of the coal seam. As the coal recovery cycle continues, the cutter-head is progressively launched further into the coal seam. High wall mining can produce thousands of tons of coal in contour-strip operations with narrow benches, previously mined areas, trench mine applications and steep-dip seams.

Underground mining

Mantrip used for transporting miners within an underground mine
 
Caterpillar Highwall Miner HW300 – Technology Bridging Underground and Open Pit Mining

Sub-surface mining consists of digging tunnels or shafts into the earth to reach buried ore deposits. Ore, for processing, and waste rock, for disposal, are brought to the surface through the tunnels and shafts. Sub-surface mining can be classified by the type of access shafts used, and the extraction method or the technique used to reach the mineral deposit. Drift mining utilizes horizontal access tunnels, slope mining uses diagonally sloping access shafts, and shaft mining utilizes vertical access shafts. Mining in hard and soft rock formations requires different techniques.

Other methods include shrinkage stope mining, which is mining upward, creating a sloping underground room, long wall mining, which is grinding a long ore surface underground, and room and pillar mining, which is removing ore from rooms while leaving pillars in place to support the roof of the room. Room and pillar mining often leads to retreat mining, in which supporting pillars are removed as miners retreat, allowing the room to cave in, thereby loosening more ore. Additional sub-surface mining methods include hard rock mining, bore hole mining, drift and fill mining, long hole stope mining, sub level caving, and block caving.

Machines

The Bagger 288 is a bucket-wheel excavator used in strip mining. It is also one of the largest land vehicles of all time.
 
A Bucyrus Erie 2570 dragline and CAT 797 haul truck at the North Antelope Rochelle opencut coal mine

Heavy machinery is used in mining to explore and develop sites, to remove and stockpile overburden, to break and remove rocks of various hardness and toughness, to process the ore, and to carry out reclamation projects after the mine is closed. Bulldozers, drills, explosives and trucks are all necessary for excavating the land. In the case of placer mining, unconsolidated gravel, or alluvium, is fed into machinery consisting of a hopper and a shaking screen or trommel which frees the desired minerals from the waste gravel. The minerals are then concentrated using sluices or jigs.

Large drills are used to sink shafts, excavate stopes, and obtain samples for analysis. Trams are used to transport miners, minerals and waste. Lifts carry miners into and out of mines, and move rock and ore out, and machinery in and out, of underground mines. Huge trucks, shovels and cranes are employed in surface mining to move large quantities of overburden and ore. Processing plants utilize large crushers, mills, reactors, roasters and other equipment to consolidate the mineral-rich material and extract the desired compounds and metals from the ore.

Processing

Once the mineral is extracted, it is often then processed. The science of extractive metallurgy is a specialized area in the science of metallurgy that studies the extraction of valuable metals from their ores, especially through chemical or mechanical means.

Mineral processing (or mineral dressing) is a specialized area in the science of metallurgy that studies the mechanical means of crushing, grinding, and washing that enable the separation (extractive metallurgy) of valuable metals or minerals from their gangue (waste material). Processing of placer ore material consists of gravity-dependent methods of separation, such as sluice boxes. Only minor shaking or washing may be necessary to disaggregate (unclump) the sands or gravels before processing. Processing of ore from a lode mine, whether it is a surface or subsurface mine, requires that the rock ore be crushed and pulverized before extraction of the valuable minerals begins. After lode ore is crushed, recovery of the valuable minerals is done by one, or a combination of several, mechanical and chemical techniques.

Since most metals are present in ores as oxides or sulfides, the metal needs to be reduced to its metallic form. This can be accomplished through chemical means such as smelting or through electrolytic reduction, as in the case of aluminium. Geometallurgy combines the geologic sciences with extractive metallurgy and mining.
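As representative textbook examples (not specific to any operation described above), the reduction step can be written for carbothermic smelting of an iron oxide and for the overall electrolytic Hall–Héroult reaction used for aluminium:

```latex
% Representative reduction reactions (textbook examples; requires amsmath).
\begin{align*}
  \mathrm{Fe_2O_3(s)} + 3\,\mathrm{CO(g)} &\longrightarrow 2\,\mathrm{Fe(l)} + 3\,\mathrm{CO_2(g)}
    && \text{(smelting: CO reduces the oxide)} \\
  2\,\mathrm{Al_2O_3}\ (\text{dissolved in molten cryolite}) + 3\,\mathrm{C(s)} &\longrightarrow 4\,\mathrm{Al(l)} + 3\,\mathrm{CO_2(g)}
    && \text{(electrolysis with carbon anodes)}
\end{align*}
```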

In 2018, led by Chemistry and Biochemistry professor Bradley D. Smith, University of Notre Dame researchers "invented a new class of molecules whose shape and size enable them to capture and contain precious metal ions," reported in a study published by the Journal of the American Chemical Society. The new method "converts gold-containing ore into chloroauric acid and extracts it using an industrial solvent. The container molecules are able to selectively separate the gold from the solvent without the use of water stripping." The newly developed molecules can eliminate water stripping, whereas mining traditionally "relies on a 125-year-old method that treats gold-containing ore with large quantities of poisonous sodium cyanide... this new process has a milder environmental impact and that, besides gold, it can be used for capturing other metals such as platinum and palladium," and could also be used in urban mining processes that remove precious metals from wastewater streams.

Environmental effects

Environmental effects of mining can occur at local, regional, and global scales through direct and indirect mining practices. The effects can result in erosion, sinkholes, loss of biodiversity, or the contamination of soil, groundwater, and surface water by the chemicals emitted from mining processes. These processes also affect the atmosphere through carbon emissions, which in turn affect human health and biodiversity. Some mining methods (lithium mining, phosphate mining, coal mining, mountaintop removal mining, and sand mining) may have such significant environmental and public health effects that mining companies in some countries are required to follow strict environmental and rehabilitation codes to ensure that the mined area returns to its original state.

Environmental regulation

Iron hydroxide precipitate stains a stream receiving acid drainage from surface coal mining.

Mine operators frequently have to follow some regulatory practices to minimize environmental impact and avoid impacting human health. In better regulated economies, regulations require the common steps of environmental impact assessment, development of environmental management plans, mine closure planning (which must be done before the start of mining operations), and environmental monitoring during operation and after closure. However, in some areas, particularly in the developing world, government regulations may not be well enforced.

For major mining companies and any company seeking international financing, there are a number of other mechanisms to enforce environmental standards. These generally relate to financing standards such as the Equator Principles, IFC environmental standards, and criteria for Socially responsible investing. Mining companies have used this oversight from the financial sector to argue for some level of industry self-regulation. In 1992, a Draft Code of Conduct for Transnational Corporations was proposed at the Rio Earth Summit by the UN Centre for Transnational Corporations (UNCTC), but the Business Council for Sustainable Development (BCSD) together with the International Chamber of Commerce (ICC) argued successfully for self-regulation instead.

This was followed by the Global Mining Initiative which was begun by nine of the largest metals and mining companies and which led to the formation of the International Council on Mining and Metals, whose purpose was to "act as a catalyst" in an effort to improve social and environmental performance in the mining and metals industry internationally. The mining industry has provided funding to various conservation groups, some of which have been working with conservation agendas that are at odds with an emerging acceptance of the rights of indigenous people – particularly the right to make land-use decisions.

Certification of mines with good practices occurs through the International Organization for Standardization (ISO). For example, ISO 9000 and ISO 14001, which certify an "auditable environmental management system", involve short inspections, although they have been accused of lacking rigor. Certification is also available through Ceres' Global Reporting Initiative, but these reports are voluntary and unverified. Miscellaneous other certification programs exist for various projects, typically through nonprofit groups.

The purpose of a 2012 EPS PEAKS paper was to provide evidence on policies for managing the ecological costs and maximizing the socio-economic benefits of mining through host-country regulatory initiatives. It found existing literature suggesting donors encourage developing countries to:

  • Make the environment-poverty link and introduce cutting-edge wealth measures and natural capital accounts.
  • Reform old taxes in line with more recent financial innovation, engage directly with the companies, enact land use and impact assessments, and incorporate specialized support and standards agencies.
  • Set in play transparency and community participation initiatives using the wealth accrued.

Waste

Location of waste rock storage (center) at the Teghut copper-molybdenum mine in Armenia's northern Lori province.

Ore mills generate large amounts of waste, called tailings. For example, 99 tons of waste is generated per ton of copper, with even higher ratios in gold mining – because only 5.3 g of gold is extracted per ton of ore, a ton of gold produces 200,000 tons of tailings. (As time goes on and richer deposits are exhausted – and technology improves – this number is going down to 0.5 g and less.) These tailings can be toxic. Tailings, which are usually produced as a slurry, are most commonly dumped into ponds made from naturally existing valleys. These ponds are secured by impoundments (dams or embankment dams). In 2000 it was estimated that 3,500 tailings impoundments existed, and that every year, 2 to 5 major failures and 35 minor failures occurred. For example, in the Marcopper mining disaster at least 2 million tons of tailings were released into a local river. In 2015, Barrick Gold Corporation spilled over 1 million liters of cyanide solution into a total of five rivers in Argentina near their Veladero mine. Since 2007 in central Finland, the Talvivaara Terrafame polymetal mine's waste effluent and leaks of saline mine water have resulted in ecological collapse of a nearby lake. Subaqueous tailings disposal is another option. The mining industry has argued that submarine tailings disposal (STD), which disposes of tailings in the sea, is ideal because it avoids the risks of tailings ponds. The practice is illegal in the United States and Canada, but it is used in the developing world.
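A quick back-of-the-envelope check of the gold figure quoted above, using only the numbers in this paragraph:

```python
# Back-of-the-envelope check of the tailings figure quoted above.
grade_g_per_tonne = 5.3               # gold extracted per tonne of ore (from the text)
grams_per_tonne_of_gold = 1_000_000   # 1 t = 1,000,000 g

ore_per_tonne_of_gold = grams_per_tonne_of_gold / grade_g_per_tonne
print(f"Ore processed per tonne of gold: {ore_per_tonne_of_gold:,.0f} t")
# ~188,700 t of ore, essentially all of which becomes tailings -
# the same order as the 200,000 t cited in the text.
```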

The waste is classified as either sterile or mineralized, with acid generating potential, and the movement and storage of this material form a major part of the mine planning process. When the mineralised package is determined by an economic cut-off, the near-grade mineralised waste is usually dumped separately with a view to later treatment should market conditions change and make it economically viable. Civil engineering design parameters are used in the design of the waste dumps, and special conditions apply to high-rainfall areas and to seismically active areas. Waste dump designs must meet all regulatory requirements of the country in whose jurisdiction the mine is located. It is also common practice to rehabilitate dumps to an internationally acceptable standard, which in some cases means that higher standards than the local regulatory standard are applied.

Industry

The Särkijärvi pit of the apatite mine in Siilinjärvi, Finland

Mining exists in many countries. London is the headquarters for large miners such as Anglo American, BHP and Rio Tinto. The US mining industry is also large, but it is dominated by extraction of coal and other nonmetal minerals (e.g., rock and sand), and various regulations have worked to reduce the significance of mining in the United States. In 2007 the total market capitalization of mining companies was reported at US$962 billion, which compares to a total global market cap of publicly traded companies of about US$50 trillion in 2007. In 2002, Chile and Peru were reportedly the major mining countries of South America. The mineral industry of Africa includes the mining of various minerals; it produces relatively little of the industrial metals copper, lead, and zinc, but according to one estimate it holds 40% of the world's gold reserves, 60% of its cobalt, and 90% of its platinum group metals. Mining in India is a significant part of that country's economy. In the developed world, mining in Australia, with BHP founded and headquartered in the country, and mining in Canada are particularly significant. For rare earth minerals mining, China reportedly controlled 95% of production in 2013.

The Bingham Canyon Mine of Rio Tinto's subsidiary, Kennecott Utah Copper.

While exploration and mining can be conducted by individual entrepreneurs or small businesses, most modern-day mines are large enterprises requiring large amounts of capital to establish. Consequently, the mining sector of the industry is dominated by large, often multinational, companies, most of them publicly listed. It can be argued that what is referred to as the 'mining industry' is actually two sectors, one specializing in exploration for new resources and the other in mining those resources. The exploration sector is typically made up of individuals and small mineral resource companies, called "juniors", which are dependent on venture capital. The mining sector is made up of large multinational companies that are sustained by production from their mining operations. Various other industries such as equipment manufacture, environmental testing, and metallurgy analysis rely on, and support, the mining industry throughout the world. Canadian stock exchanges have a particular focus on mining companies, particularly junior exploration companies through Toronto's TSX Venture Exchange; Canadian companies raise capital on these exchanges and then invest the money in exploration globally. Some have argued that below juniors there exists a substantial sector of illegitimate companies primarily focused on manipulating stock prices.

Mining operations can be grouped into five major categories in terms of their respective resources. These are oil and gas extraction, coal mining, metal ore mining, nonmetallic mineral mining and quarrying, and mining support activities. Of all of these categories, oil and gas extraction remains one of the largest in terms of its global economic importance. Prospecting potential mining sites, a vital area of concern for the mining industry, is now done using sophisticated new technologies such as seismic prospecting and remote-sensing satellites. Mining is heavily affected by the prices of the commodity minerals, which are often volatile. The 2000s commodities boom ("commodities supercycle") increased the prices of commodities, driving aggressive mining. In addition, the price of gold increased dramatically in the 2000s, which increased gold mining; for example, one study found that conversion of forest in the Amazon increased six-fold from the period 2003–2006 (292 ha/yr) to the period 2006–2009 (1,915 ha/yr), largely due to artisanal mining.

Corporate classifications

Mining companies can be classified based on their size and financial capabilities:

  • Major companies are considered to have an adjusted annual mining-related revenue of more than US$500 million, with the financial capability to develop a major mine on their own.
  • Intermediate companies have at least $50 million in annual revenue but less than $500 million.
  • Junior companies rely on equity financing as their principal means of funding exploration. Juniors are mainly pure exploration companies, but may also produce minimally, and do not have revenue exceeding US$50 million.

For their valuation and stock market characteristics, see Valuation (finance) § Valuation of mining projects.

Regulation and governance

EITI Global Conference 2016

New regulations and a process of legislative reforms aim to improve the harmonization and stability of the mining sector in mineral-rich countries. New legislation for the mining industry in African countries still appears to be an issue, but one with the potential to be resolved once a consensus is reached on the best approach. By the beginning of the 21st century the booming and increasingly complex mining sector in mineral-rich countries was providing only slight benefits to local communities, especially given the sustainability issues involved. Increasing debate and influence by NGOs and local communities called for new approaches which would also include disadvantaged communities and work towards sustainable development even after mine closure (including transparency and revenue management). By the early 2000s, community development issues and resettlements became mainstream concerns in World Bank mining projects. The expansion of the mining industry after mineral prices increased in 2003, and the potential fiscal revenues it promised, led other economic sectors in those countries to be neglected in terms of finance and development. This also highlighted regional and local demand for mining revenues and the inability of sub-national governments to use those revenues effectively. The Fraser Institute (a Canadian think tank) has highlighted the environmental protection laws in developing countries, as well as voluntary efforts by mining companies to improve their environmental impact.

In 2007 the Extractive Industries Transparency Initiative (EITI) was mainstreamed in all countries cooperating with the World Bank in mining industry reform. The EITI operates and was implemented with the support of the EITI multi-donor trust fund, managed by the World Bank. The EITI aims to increase transparency in transactions between governments and companies in extractive industries by monitoring the revenues and benefits flowing between industries and recipient governments. The entrance process is voluntary for each country and is monitored by multiple stakeholders, including governments, private companies and civil society representatives, who are responsible for disclosure and dissemination of the reconciliation report; however, the competitive disadvantage of company-by-company public reporting is, for some of the businesses in Ghana at least, the main constraint. Therefore, the outcome of the new EITI regulation, whether failure or success, does not "rest on the government's shoulders" alone but also on civil society and companies.

On the other hand, implementation has issues, such as the inclusion or exclusion of artisanal and small-scale mining (ASM) from the EITI and how to deal with "non-cash" payments made by companies to subnational governments. Furthermore, the disproportionate revenues the mining industry can bring to the comparatively small number of people that it employs causes other problems, such as a lack of investment in other, less lucrative sectors, leading to swings in government revenue because of volatility in the oil markets. Artisanal mining is clearly an issue in EITI countries such as the Central African Republic, D.R. Congo, Guinea, Liberia and Sierra Leone – i.e. almost half of the mining countries implementing the EITI. Among other things, the limited scope of the EITI, disparities in knowledge of the industry and in negotiation skills, and the flexibility of the policy (e.g. the liberty of countries to expand beyond the minimum requirements and adapt it to their needs) create further risks of unsuccessful implementation. Raising public awareness, with government acting as a bridge between the public and the initiative, is another important element to be considered for a successful outcome of the policy.

World Bank


The World Bank has been involved in mining since 1955, mainly through grants from its International Bank for Reconstruction and Development, with the Bank's Multilateral Investment Guarantee Agency offering political risk insurance. Between 1955 and 1990 it provided about $2 billion to fifty mining projects, broadly categorized as reform and rehabilitation, greenfield mine construction, mineral processing, technical assistance, and engineering. These projects have been criticized, particularly the Ferro Carajas project of Brazil, begun in 1981. The World Bank established mining codes intended to increase foreign investment; in 1988 it solicited feedback from 45 mining companies on how to increase their involvement.

In 1992 the World Bank began to push for privatization of government-owned mining companies with a new set of codes, beginning with its report The Strategy for African Mining. In 1997, Latin America's largest miner Companhia Vale do Rio Doce (CVRD) was privatized. These and other developments such as the Philippines 1995 Mining Act led the bank to publish a third report (Assistance for Minerals Sector Development and Reform in Member Countries) which endorsed mandatory environmental impact assessments and attention to the concerns of the local population. The codes based on this report are influential in the legislation of developing nations. The new codes are intended to encourage development through tax holidays, zero customs duties, reduced income taxes, and related measures. The results of these codes were analyzed by a group from the University of Quebec, which concluded that the codes promote foreign investment but "fall very short of permitting sustainable development". The observed negative correlation between natural resources and economic development is known as the resource curse.

Safety

Mining transport in Devnya, Bulgaria.
 
A coal miner in West Virginia spraying rockdust to reduce the combustible fraction of coal dust in the air.
 

Safety has long been a concern in the mining business, especially in sub-surface mining. The Courrières mine disaster, Europe's worst mining accident, involved the death of 1,099 miners in Northern France on March 10, 1906. This disaster was surpassed only by the Benxihu Colliery accident in China on April 26, 1942, which killed 1,549 miners. While mining today is substantially safer than it was in previous decades, mining accidents still occur. Government figures indicate that 5,000 Chinese miners die in accidents each year, while other reports have suggested a figure as high as 20,000. Mining accidents continue worldwide, including accidents causing dozens of fatalities at a time such as the 2007 Ulyanovskaya Mine disaster in Russia, the 2009 Heilongjiang mine explosion in China, and the 2010 Upper Big Branch Mine disaster in the United States. Mining has been identified by the National Institute for Occupational Safety and Health (NIOSH) as a priority industry sector in the National Occupational Research Agenda (NORA) to identify and provide intervention strategies regarding occupational health and safety issues. The Mine Safety and Health Administration (MSHA) was established in 1978 to "work to prevent death, illness, and injury from mining and promote safe and healthful workplaces for US miners." Since its establishment in 1978, the number of miner fatalities has decreased from 242 miners in 1978 to 24 miners in 2019.

There are numerous occupational hazards associated with mining, including exposure to rockdust which can lead to diseases such as silicosis, asbestosis, and pneumoconiosis. Gases in the mine can lead to asphyxiation and could also be ignited. Mining equipment can generate considerable noise, putting workers at risk for hearing loss. Cave-ins, rock falls, and exposure to excess heat are also known hazards. The current NIOSH Recommended Exposure Limit (REL) for noise is 85 dBA with a 3 dBA exchange rate, and the MSHA Permissible Exposure Limit (PEL) is 90 dBA with a 5 dBA exchange rate, both as an 8-hour time-weighted average. NIOSH has found that 25% of noise-exposed workers in Mining, Quarrying, and Oil and Gas Extraction have hearing impairment. The prevalence of hearing loss among these workers increased by 1% from 1991 to 2001.
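For readers unfamiliar with exchange rates, the allowable daily exposure time halves for every "exchange rate" decibels above the criterion level. A small sketch using the standard dose formula T = 8 h / 2^((L − criterion) / exchange rate), applied to the NIOSH REL (85 dBA, 3 dB) and the MSHA PEL (90 dBA, 5 dB):

```python
# Allowable daily exposure time for a steady sound level, using the
# standard halving rule: T = 8 h / 2 ** ((level - criterion) / exchange_rate).

def allowed_hours(level_dba: float, criterion: float, exchange_rate: float) -> float:
    """Hours per day before the 8-hour time-weighted-average limit is exceeded."""
    return 8.0 / 2 ** ((level_dba - criterion) / exchange_rate)

# Example levels drawn from the equipment ranges discussed below.
for level in (85, 90, 95, 100):
    niosh = allowed_hours(level, criterion=85, exchange_rate=3)   # NIOSH REL
    msha = allowed_hours(level, criterion=90, exchange_rate=5)    # MSHA PEL
    print(f"{level} dBA: NIOSH {niosh:.2f} h, MSHA {msha:.1f} h")
```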

Noise studies have been conducted in several mining environments. Stageloaders (84–102 dBA), shearers (85–99 dBA), auxiliary fans (84–120 dBA), continuous mining machines (78–109 dBA), and roof bolters (92–103 dBA) represent some of the noisiest equipment in underground coal mines. Dragline oilers, dozer operators, and welders using air arcing were the occupations with the highest noise exposures among surface coal miners. Coal mines had the highest hearing loss injury likelihood.

Human rights

In addition to the environmental impacts of mining processes, a prominent criticism of this form of extractive practice and of mining companies is the human rights abuses occurring within mining sites and the communities in close proximity to them. Frequently, despite being protected by international labor rights, miners are not given appropriate equipment to protect them from possible mine collapse or from harmful pollutants and chemicals released during the mining process, and they work in inhumane conditions, enduring 14-hour workdays in extreme heat and darkness with no allocated time for breaks.

Child labor

Breaker boys: child workers who broke down coal at a mine in South Pittston, Pennsylvania, United States in the early 20th century

Included within the human rights abuses that occur during mining processes are instances of child labor. These instances are a cause for widespread criticism of mines harvesting cobalt, a mineral essential for powering modern technologies such as laptops, smartphones and electric vehicles. Many of these cases of child laborers are found in the Democratic Republic of Congo. Reports have arisen of children carrying sacks of cobalt weighing 25 kg from small mines to local traders, being paid for their work only in food and accommodation. A number of companies such as Apple, Google, Microsoft and Tesla have been implicated in lawsuits brought by families whose children were severely injured or killed during mining activities in Congo. In December 2019, 14 Congolese families filed a lawsuit against Glencore, a mining company which supplies the essential cobalt to these multinational corporations, alleging negligence that led to the deaths of children or to injuries such as broken spines, as well as emotional distress and forced labor.

Indigenous peoples

There have also been instances of killings and evictions attributed to conflicts with mining companies. According to Global Witness, almost a third of the 227 activists murdered in 2020 were Indigenous peoples' rights activists on the frontlines of climate change activism linked to logging, mining, large-scale agribusiness, hydroelectric dams, and other infrastructure.

The relationship between indigenous peoples and mining is defined by struggles over access to land. In Australia, the Aboriginal Bininj said mining posed a threat to their living culture and could damage sacred heritage sites.

In the Philippines, an anti-mining movement has raised concerns regarding "the total disregard for [Indigenous communities'] ancestral land rights". Ifugao peoples' opposition to mining led a governor to proclaim a ban on mining operations in Mountain Province, Philippines.

In Brazil, more than 170 tribes organized a march to oppose controversial attempts to strip back indigenous land rights and open their territories to mining operations. The United Nations Commission on Human Rights has called on Brazil's Supreme Court to uphold Indigenous land rights to prevent exploitation by mining groups and industrial agriculture.

Records

Chuquicamata, Chile, site of the largest circumference and second deepest open pit copper mine in the world.

As of 2019, Mponeng is the world's deepest mine measured from ground level, reaching a depth of 4 km (2.5 mi). The trip from the surface to the bottom of the mine takes over an hour. It is a gold mine in South Africa's Gauteng province. Previously known as Western Deep Levels #1 Shaft, the underground and surface works were commissioned in 1987. The mine is considered to be one of the most substantial gold mines in the world.

The Moab Khutsong gold mine in North West Province (South Africa) has the world's longest winding steel wire rope, which is able to lower workers to 3,054 metres (10,020 ft) in one uninterrupted four-minute journey.

The deepest mine in Europe is the 16th shaft of the uranium mines in Příbram, Czech Republic, at 1,838 metres (6,030 ft). Second is Bergwerk Saar in Saarland, Germany, at 1,750 metres (5,740 ft).

The deepest open-pit mine in the world is Bingham Canyon Mine in Bingham Canyon, Utah, United States, at over 1,200 metres (3,900 ft). The largest and second deepest open-pit copper mine in the world is Chuquicamata in northern Chile at 900 metres (3,000 ft), which annually produces 443,000 tons of copper and 20,000 tons of molybdenum.

The deepest open-pit mine with respect to sea level is Tagebau Hambach in Germany, where the base of the pit is 299 metres (981 ft) below sea level.

The largest underground mine is Kiirunavaara Mine in Kiruna, Sweden. With 450 kilometres (280 mi) of roads, 40 million tonnes of annually produced ore, and a depth of 1,270 metres (4,170 ft), it is also one of the most modern underground mines. The deepest borehole in the world is Kola Superdeep Borehole at 12,262 metres (40,230 ft), but this is connected to scientific drilling, not mining.

Metal reserves and recycling

Macro of native copper about 1 1/2 inches (4 cm) in size.
 
The Pyhäsalmi Mine, a metal mine in Pyhäjärvi, Finland
 
A metal recycling plant in South Carolina that has been abandoned for years.
 

During the 20th century, the variety of metals used in society grew rapidly. Today, the development of major nations such as China and India and advances in technologies are fueling an ever-greater demand. The result is that metal mining activities are expanding and more and more of the world's metal stocks are above ground in use rather than below ground as unused reserves. An example is the in-use stock of copper. Between 1932 and 1999, copper in use in the US rose from 73 kilograms (161 lb) to 238 kilograms (525 lb) per person.

95% of the energy used to make aluminium from bauxite ore is saved by using recycled material. However, levels of metals recycling are generally low. In 2010, the International Resource Panel, hosted by the United Nations Environment Programme (UNEP), published reports on metal stocks that exist within society and their recycling rates.

The report's authors observed that the metal stocks in society can serve as huge mines above ground. However, they warned that the recycling rates of some rare metals used in applications such as mobile phones, battery packs for hybrid cars, and fuel cells are so low that unless future end-of-life recycling rates are dramatically stepped up these critical metals will become unavailable for use in modern technology.

As recycling rates are low and so much metal has already been extracted, some landfills now contain higher concentrations of metal than mines themselves. This is especially true of aluminium, used in cans, and precious metals, found in discarded electronics. Furthermore, metals in waste deposited 15 years ago have still not broken down, so less processing may be required than for mined ores. A study undertaken by Cranfield University found that £360 million of metals could be mined from just four landfill sites. Waste also contains up to 20 MJ/kg of energy, potentially making re-extraction more profitable. However, although the first landfill mine opened in Tel Aviv, Israel in 1953, little work has followed due to the abundance of accessible ores.

Social model of disability

From Wikipedia, the free encyclopedia

The social model of disability identifies systemic barriers, derogatory attitudes, and social exclusion (intentional or inadvertent), which make it difficult or impossible for disabled people to attain their valued functionings. The social model of disability diverges from the dominant medical model of disability, which is a functional analysis of the body as a machine to be fixed in order to conform with normative values. While physical, sensory, intellectual, or psychological variations may result in individual functional differences, these do not necessarily have to lead to disability unless society fails to take account of people's individual needs and include them intentionally. The origin of the approach can be traced to the 1960s, and the specific term emerged from the United Kingdom in the 1980s.

The social model of disability seeks to redefine disability to refer to the restrictions caused by society when it does not give equitable social and structural support according to disabled peoples' structural needs. As a simple example, if a person is unable to climb stairs, the medical model focuses on making the individual physically able to climb stairs. The social model instead tries to make stair-climbing unnecessary, for example by having society adapt to the person's needs and replace the stairs with a wheelchair-accessible ramp. According to the social model, the person remains disabled with respect to climbing stairs, but the disability is negligible and no longer disabling in that scenario, because the person can get to the same locations without climbing any stairs.

History

Disability rights movement

There is a hint from before the 1970s that the interaction between disability and society was beginning to be considered. British politician and disability rights campaigner Alf Morris wrote in 1969 (emphasis added):

When the title of my Bill was announced, I was frequently asked what kind of improvements for the chronically sick and disabled I had in mind. It always seemed best to begin with the problems of access. I explained that I wanted to remove the severe and gratuitous social handicaps inflicted on disabled people, and often on their families and friends, not just by their exclusion from town and county halls, art galleries, libraries and many of the universities, but even from pubs, restaurants, theatres, cinemas and other places of entertainment ... I explained that I and my friends were concerned to stop society from treating disabled people as if they were a separate species.

The history of the social model of disability begins with the history of the disability rights movement. Around 1970, various groups in North America, including sociologists, disabled people, and disability-focused political groups, began to pull away from the accepted medical lens of viewing disability. Instead, they began to discuss things like oppression, civil rights, and accessibility. This change in discourse resulted in conceptualizations of disability that were rooted in social constructs.

In 1975, the UK organization Union of the Physically Impaired Against Segregation (UPIAS) claimed: "In our view it is society which disables physically impaired people. Disability is something imposed on top of our impairments by the way we are unnecessarily isolated and excluded from full participation in society." This became known as the social interpretation, or social definition, of disability.

Mike Oliver

In 1983, the disabled academic Mike Oliver coined the phrase social model of disability in reference to these ideological developments. Oliver focused on the idea of an individual model versus a social model. Oliver's seminal 1990 book The Politics of Disablement is widely cited as a major moment in the adoption of this model. The book included just three pages about the social model of disability.

Developments

The "social model" was extended and developed by academics and activists in Australia, the UK, the US, and other countries to include all disabled people, including those who have learning disabilities, intellectual disabilities, or emotional, mental health or behavioural problems.

Tool for cultural analysis

The social model has become a key tool in the analysis of the cultural representation of disability, from literature and radio to charity imagery and cinema. It has become the key conceptual analysis for challenging, for example, stereotypes and archetypes of disabled people by revealing how conventional imagery reinforces the oppression of disabled people. Key theorists include Paul Darke (cinema), Lois Keith (literature), Leonard Davis (Deaf culture), Jenny Sealey (theatre) and Mary-Pat O'Malley (radio).

Components and usage

A fundamental aspect of the social model concerns equality. The struggle for equality is often compared to the struggles of other socially marginalized groups. Equal rights are said to empower people with the "ability" to make decisions and the opportunity to live life to the fullest. A related phrase often used by disability rights activists, as with other social activism, is "Nothing About Us Without Us".

The social model of disability focuses on changes required in society. These might be in terms of:

  • Attitudes, for example a more positive attitude towards certain mental traits or behaviors, or not underestimating the potential quality of life of disabled people,
  • Social support, for example help dealing with barriers; resources, aids, or positive discrimination to provide equal access, for example providing someone to explain work culture to an autistic employee,
  • Information, for example using suitable formats (e.g. braille) or levels (e.g. simplicity of language) or coverage (e.g. explaining issues others may take for granted),
  • Physical structures, for example buildings with sloped access and elevators, or
  • Flexible work hours for people with circadian rhythm sleep disorders.

Limitations

Oliver did not intend the social model of disability to be an all-encompassing theory of disability, but rather a starting point in reframing how society views disability. This model was conceived of as a tool that could be used to improve the lives of disabled people, rather than a complete explanation for every experience and circumstance.

It has been criticized for underplaying the role of disabilities. It has also been criticized for not promoting the normal differences between disabled people, who can be any age, gender, race, and sexual orientation, and instead presenting them as a monolithic, insufficiently individuated group of people.

As an identity

In the late 20th century and early 21st century, the social model of disability became a dominant identity for disabled people in the UK.

The social model of disability implies that attempts to change, "fix", or "cure" individuals, especially when used against the wishes of the individual, can be discriminatory and prejudiced. This attitude, which may be seen as stemming from a medical model and a subjective value system, can harm the self-esteem and social inclusion of those constantly subjected to it (e.g. being told they are not as good or valuable, in an overall and core sense, as others). Some communities have actively resisted "treatments", while, for example, defending a unique culture or set of abilities. In the Deaf community, sign language is valued even if most people do not know it, and some parents argue against cochlear implants for deaf infants who cannot consent to them. Autistic people may say that their "unusual" behavior, which they say can serve an important purpose to them, should not have to be suppressed to please others. They argue instead for acceptance of neurodiversity and accommodation to different needs and goals. Some people diagnosed with a mental disorder argue that they are just different and don't necessarily conform. The biopsychosocial model of disease/disability is an attempt by practitioners to address this.

The Neurodiversity label has been used by various mental-disability rights advocates within the context of the social model of disability. The label has been applied to other neurodevelopmental conditions apart from autism, such as attention deficit hyperactivity disorder (ADHD), developmental speech disorders, dyslexia, dysgraphia, dyspraxia, dyscalculia, dysnomia, intellectual disability, and Tourette syndrome, as well as schizophrenia, bipolar disorder, and some mental health conditions such as schizoaffective disorder, antisocial personality disorder, dissociative disorders, and obsessive–compulsive disorder.

The social model implies that practices such as eugenics are founded on social values and a prejudiced understanding of the potential and value of those labeled disabled. "Over 200,000 disabled people were some of the earlier victims of the Holocaust, after Communists, other political enemies, and homosexuals."

A 1986 article stated:

It is important that we do not allow ourselves to be dismissed as if we all come under this one great metaphysical category 'the disabled'. The effect of this is a depersonalization, a sweeping dismissal of our individuality, and a denial of our right to be seen as people with our own uniqueness, rather than as the anonymous constituents of a category or group. These words that lump us all together – 'the disabled', 'spina bifida', 'tetraplegic', 'muscular dystrophy', – are nothing more than terminological rubbish bins into which all the important things about us as people get thrown away.

Economic aspects

The social model also relates to economic empowerment, proposing that people can be disabled by a lack of resources to meet their needs. For example, a disabled person may need support services to be able to participate fully in society, and can become disabled if society cuts access to those support services, perhaps in the name of government austerity measures.

The social model addresses other issues, such as the underestimation of the potential of disabled people to contribute to society and add economic value to society if they are given equal rights and equally suitable facilities and opportunities as others. Economic research on companies that attempt to accommodate disability in their workforce suggests that they outperform their competitors.

In Autumn 2001, the UK Office for National Statistics identified that approximately one-fifth of the working-age population was disabled, equating to an estimated 7.1 million disabled people, compared to an estimated 29.8 million nondisabled people. This analysis also provided insight into some of the reasons why disabled people weren't in the labor market, such as the fact that the reduction in disability benefits upon entering the labor market would make employment not worthwhile. A three-pronged approach was suggested: "incentives to work via the tax and benefit system, for example through the Disabled Person's Tax Credit; helping people back into work, for example via the New Deal for Disabled People; and tackling discrimination in the workplace via anti-discrimination policy. Underpinning this are the Disability Discrimination Act (DDA) 1995 and the Disability Rights Commission."

Canada and the United States have operated under the premise that social assistance benefits should not exceed the amount of money earned through labour in order to give citizens an incentive to search for and maintain employment. This has led to widespread poverty amongst disabled citizens. In the 1950s, disability pensions were established and included various forms of direct economic assistance; however, compensation was low. Since the 1970s, both governments have viewed unemployed, disabled citizens as excess labor due to continuous high unemployment rates and have made minimal attempts to increase employment, keeping disabled people at poverty-level incomes due to the 'incentive' principle. Poverty is the most debilitating circumstance disabled people face, resulting in the inability to afford proper medical, technological and other assistance necessary to participate in society.

Law and public policy

In the United Kingdom, the Disability Discrimination Act defines disability using the medical model: disabled people are defined as people with certain conditions or limitations on their ability to carry out "normal day-to-day activities." But the requirement of employers and service providers to make "reasonable adjustments" to their policies or practices, or to physical aspects of their premises, follows the social model. By making adjustments, employers and service providers are removing the barriers that disable, according to the social model. In 2006, amendments to the act called for local authorities and others to actively promote disability equality; this was enforced via the formation of the Disability Equality Duty in December 2006. In 2010, the Disability Discrimination Act (1995) was amalgamated into the Equality Act 2010, along with other pertinent discrimination legislation. The Equality Act 2010 extends the law on discrimination to indirect discrimination: for example, if a carer of a disabled person is discriminated against, this is now also unlawful. Since October 2010, when the act came into effect, employers may not legally ask questions about illness or disability at a job interview, or ask a referee to comment on such matters in a reference, except where adjustments are needed for the interview itself to proceed. After making a job offer, an employer may lawfully ask such questions.

In the United States, the Americans with Disabilities Act of 1990 (ADA) is a wide-ranging civil rights law that prohibits discrimination based on disability in a wide range of settings. The ADA was the first civil rights law of its kind in the world and affords protections against discrimination to disabled Americans. The law was modeled after the Civil Rights Act of 1964, which made discrimination based on race, religion, sex, national origin, and other characteristics illegal. It requires that mass transportation, commercial buildings, and public accommodations be accessible to disabled people.

In 2007, the European Court of Justice, in the Chacón Navas v Eurest Colectividades SA case, defined disability narrowly according to a medical definition that excluded temporary illness when considering the directive establishing a general framework for equal treatment in employment and occupation (Council Directive 2000/78/EC). The directive did not provide any definition of disability, despite earlier discourse in EU policy documents endorsing the social model of disability. This allowed the Court of Justice to adopt a narrow, medical definition.

Technology

Over the last several decades, technology has transformed networks, services, and communication by promoting the rise of telecommunications, computer use, and related tools. This Digital Revolution has changed how people work, learn, and interact, moving these basic human activities onto technological platforms. However, many people who use such technology experience a form of disability. Even if it is not physically visible, those with, for example, cognitive impairments, hand tremors, or vision impairments have some form of disability that prevents them from fully accessing technology in the way that those without a "technological disability" do.

In "Disability and New Media," Katie Ellis and Mike Kent state that "technology is often presented as a source of liberation; however, developments associated with Web 2.0 show that this is not always the case." They go on to state that the technological advancement of Web 2.0 is tethered to social ideology and stigma which "routinely disables people with disability."

In "Digital Disability: The Social Construction of Disability in New Media," Gregg Goggin and Christopher Newell call for an innovative understanding of new media and disability issues. They trace developments ranging from telecommunications to assistive technologies to offer a technoscience of disability ,which offers a global perspective on how disabled people are represented as users, consumers, viewers, or listeners of new media, by policymakers, corporations, programmers, and disabled people themselves.

Social construction of disability

The social construction of disability comes from a paradigm that suggests that society's beliefs about a particular community, group, or population are grounded in the power structures inherent in that society at any given time. These are often steeped in historical representations of the issue and social expectations surrounding concepts, such as disability, thereby enabling a social construct around what society deems disabled and healthy.

Ideas surrounding disability stem from societal attitudes, often connected to who is deserving or undeserving, and deemed productive to society at any given time. For example, in the medieval period, a person's moral behavior established disability. Disability was a divine punishment or side effect of a moral failing; being physically or biologically different was not enough to be considered disabled. Only during the European Enlightenment did society change its definition of disability to be more related to biology. However, what most Western Europeans considered to be healthy determined the new biological definition of health.

2000 Paralympics

Since the invention of television in the early 1900s, this medium has held a pervasive influence on public outlook on many aspects of society, disability being one of them. One example is how the 2000 Paralympics were televised, in contrast to the Olympics. The 2000 Sydney Paralympic Games, one of the biggest in history, were barely acknowledged by mainstream media prior to the event. The Sydney Paralympic organizers worked extensively to try to solicit coverage of the Games. For more than two years, they negotiated with Channel 7 to broadcast the competitions. Channel 7 proposed that it would agree to broadcast the event only if the Paralympics paid it $3 million to cover any shortfall in advertising revenue. Eventually, the Australian Broadcasting Corporation (ABC) and Channel 7 announced they would be broadcasting the Games, with Channel 7 "complementing" the coverage with a highlights package that ran daily on its pay-TV channel. ABC also promised to broadcast at least 60 minutes of daily highlights, and later agreed to air a live broadcast of the opening and closing ceremonies. The opening and closing ceremonies were quite popular among viewers, watched by 2.5 million; however, the rest of the Games were not popular. While the Olympics were covered live throughout the entire event, the Paralympics were not seen as important enough for the same live coverage before the initial showing. By separating the Olympics and Paralympics, and thus indicating that one is less valuable than the other, disability is socially constructed.

Applications

Applying the social model of disability can change goals and care plans. For example, with the medical model of disability, the goal may be to help a child acquire typical abilities and to reduce impairment. With the social model, the goal may be to have a child be included in the normal life of the community, such as attending birthday parties and other social events, regardless of the level of function.

Education

It has been suggested that disability education tries to restore the idea of a moral community, one in which the members question what constitutes a good life, reimagine education, see physical and mental conditions as part of a range of abilities, consider that different talents are distributed in different ways, and understand that all talents should be recognized. In this system, all students would be included in the educational network instead of being set apart as special cases, and it would be acknowledged that all humans have individual needs.

X-ray microtomography

From Wikipedia, the free encyclopedia
 
3D rendering of a µCT scan of a leaf piece, resolution circa 40 µm/voxel.
 
Two phase µCT analysis of Ti2AlC/Al MAX phase composite

X-ray microtomography, like tomography and X-ray computed tomography, uses X-rays to create cross-sections of a physical object that can be used to recreate a virtual model (3D model) without destroying the original object. The prefix micro- (symbol: µ) is used to indicate that the pixel sizes of the cross-sections are in the micrometre range. These pixel sizes have also resulted in the terms high-resolution X-ray tomography, micro–computed tomography (micro-CT or µCT), and similar terms. Sometimes the terms high-resolution CT (HRCT) and micro-CT are differentiated, but in other cases the term high-resolution micro-CT is used. Virtually all tomography today is computed tomography.

Micro-CT has applications both in medical imaging and in industrial computed tomography. In general, there are two types of scanner setups. In one setup, the X-ray source and detector are typically stationary during the scan while the sample/animal rotates. The second setup, much more like a clinical CT scanner, is gantry based where the animal/specimen is stationary in space while the X-ray tube and detector rotate around. These scanners are typically used for small animals (in vivo scanners), biomedical samples, foods, microfossils, and other studies for which minute detail is desired.

The first X-ray microtomography system was conceived and built by Jim Elliott in the early 1980s. The first published X-ray microtomographic images were reconstructed slices of a small tropical snail, with pixel size about 50 micrometers.

Working principle

Imaging system

Fan beam reconstruction

The fan-beam system is based on a one-dimensional (1D) X-ray detector and an electronic X-ray source, creating 2D cross-sections of the object. It is typically used in human computed tomography systems.

Cone beam reconstruction

The cone-beam system is based on a 2D X-ray detector (camera) and an electronic X-ray source, creating projection images that later will be used to reconstruct the image cross-sections.

Open/Closed systems

Open X-ray system

In an open system, X-rays may escape or leak out, so the operator must stay behind a shield, wear special protective clothing, or operate the scanner from a distance or a different room. Typical examples of these scanners are human medical systems and scanners designed for large objects.

Closed X-ray system

In a closed system, X-ray shielding is put around the scanner so that it can be placed on a desk or special table. Although the scanner is shielded, care must be taken and the operator usually carries a dosimeter, since X-rays have a tendency to be absorbed by metal objects and then re-emitted, much like an antenna. Although a typical scanner produces a relatively harmless volume of X-rays, repeated scans in a short timeframe could pose a danger. Digital detectors with small pixel pitches and micro-focus X-ray tubes are usually employed to yield high-resolution images.

Closed systems tend to become very heavy because lead is used to shield the X-rays. Therefore, the smaller scanners only have a small space for samples.

3D image reconstruction

The principle

Because microtomography scanners offer isotropic, or near isotropic, resolution, display of images does not need to be restricted to the conventional axial images. Instead, it is possible for a software program to build a volume by 'stacking' the individual slices one on top of the other. The program may then display the volume in an alternative manner.
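
To make the stacking step concrete, here is a minimal Python sketch using NumPy and imageio; the slice file names and counts are illustrative assumptions rather than the output of any particular scanner.

  import numpy as np
  import imageio.v3 as iio

  # Read each reconstructed axial slice (hypothetical files slice_0000.tif,
  # slice_0001.tif, ...) and stack them into a 3D array (n_slices, rows, cols).
  slices = [iio.imread(f"slice_{i:04d}.tif") for i in range(256)]
  volume = np.stack(slices, axis=0)

  # Because the voxels are (near) isotropic, sections through other axes are
  # equally valid views of the data.
  axial = volume[128, :, :]
  coronal = volume[:, 128, :]
  sagittal = volume[:, :, 128]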

Image reconstruction software

For X-ray microtomography, powerful open-source software is available, such as the ASTRA Toolbox. The ASTRA Toolbox is a MATLAB and Python toolbox of high-performance GPU primitives for 2D and 3D tomography, developed from 2009 to 2014 by the iMinds-Vision Lab at the University of Antwerp and, since 2014, jointly by iMinds-VisionLab (UAntwerpen) and CWI, Amsterdam. The toolbox supports parallel-, fan-, and cone-beam geometries, with highly flexible source/detector positioning. A large number of reconstruction algorithms are available, including FBP, ART, SIRT, SART, and CGLS.
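
As a hedged illustration, the sketch below uses the ASTRA Toolbox's Python bindings to forward-project a simple synthetic phantom and reconstruct it with SIRT; the phantom, grid size, angle count, and iteration count are all illustrative assumptions, not settings from the article.

  import astra
  import numpy as np

  # Simple synthetic phantom on a 256 x 256 grid (a bright square).
  phantom = np.zeros((256, 256), dtype=np.float32)
  phantom[96:160, 96:160] = 1.0

  # Parallel-beam geometry: 256-pixel detector, 180 angles over half a circle.
  vol_geom = astra.create_vol_geom(256, 256)
  proj_geom = astra.create_proj_geom('parallel', 1.0, 256,
                                     np.linspace(0, np.pi, 180, endpoint=False))
  proj_id = astra.create_projector('linear', proj_geom, vol_geom)

  # Forward projection gives a sinogram; in practice this comes from the scanner.
  sino_id, sinogram = astra.create_sino(phantom, proj_id)

  # Reconstruct with SIRT, one of the algorithms listed above.
  rec_id = astra.data2d.create('-vol', vol_geom)
  cfg = astra.astra_dict('SIRT')
  cfg['ProjectorId'] = proj_id
  cfg['ProjectionDataId'] = sino_id
  cfg['ReconstructionDataId'] = rec_id
  alg_id = astra.algorithm.create(cfg)
  astra.algorithm.run(alg_id, 100)          # 100 SIRT iterations
  reconstruction = astra.data2d.get(rec_id)

  # Free ASTRA-managed objects.
  astra.algorithm.delete(alg_id)
  astra.data2d.delete([rec_id, sino_id])
  astra.projector.delete(proj_id)

Cone-beam data can be handled analogously through the toolbox's 3D interfaces.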

For 3D visualization, tomviz is a popular open-source tool for tomography.

Volume rendering

Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set, as produced by a microtomography scanner. Usually these data are acquired in a regular pattern (e.g., one slice every millimeter) with a regular number of image pixels per slice. This is an example of a regular volumetric grid, with each volume element, or voxel, represented by a single value obtained by sampling the immediate area surrounding the voxel.
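
One of the simplest projection techniques, a maximum intensity projection (MIP), can be sketched in a few lines of NumPy; the random dummy volume below stands in for real reconstructed data.

  import numpy as np

  rng = np.random.default_rng(0)
  volume = rng.random((128, 256, 256)).astype(np.float32)   # (slices, rows, cols)

  # Each pixel of the projection takes the maximum voxel value along the
  # chosen viewing axis.
  mip_axial = volume.max(axis=0)     # view along the slice-stacking axis
  mip_coronal = volume.max(axis=1)   # view along a perpendicular axis

  print(mip_axial.shape, mip_coronal.shape)   # (256, 256) (128, 256)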

Image segmentation

Where different structures have similar threshold density, it can become impossible to separate them simply by adjusting volume rendering parameters. The solution is called segmentation, a manual or automatic procedure that can remove the unwanted structures from the image.
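
As a hedged example, one common automatic approach is global thresholding followed by connected-component labelling; in the NumPy/SciPy sketch below, the input file name and the threshold value are illustrative assumptions.

  import numpy as np
  from scipy import ndimage

  volume = np.load("volume.npy")      # hypothetical reconstructed volume
  mask = volume > 0.5                 # keep voxels denser than the threshold

  # Label connected structures, then keep only the largest one, discarding
  # small unwanted objects.
  labels, n_components = ndimage.label(mask)
  sizes = ndimage.sum(mask, labels, index=range(1, n_components + 1))
  largest = labels == (np.argmax(sizes) + 1)
  print(f"{n_components} components; largest has {int(largest.sum())} voxels")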

Typical use

Archaeology

Biomedical

  • Both in vitro and in vivo small animal imaging
  • Neurons
  • Human skin samples
  • Bone samples, including teeth, ranging in size from rodents to human biopsies
  • Lung imaging using respiratory gating
  • Cardiovascular imaging using cardiac gating
  • Imaging of the human eye, ocular microstructures and tumors
  • Tumor imaging (may require contrast agents)
  • Soft tissue imaging
  • Insects – Insect development
  • Parasitology – migration of parasites, parasite morphology

Developmental biology

  • Tracing the development of the extinct Tasmanian tiger during growth in the pouch
  • Model and non-model organisms (elephants, zebrafish, and whales)

Electronics

  • Small electronic components, e.g. a DRAM IC in a plastic case.

Microdevices

Composite materials and metallic foams

  • Ceramics and Ceramic–Metal composites. Microstructural analysis and failure investigation
  • Composite material with glass fibers 10 to 12 micrometres in diameter

Polymers, plastics

Diamonds

  • Detecting defects in a diamond and finding the best way to cut it.

Food and seeds

  • 3-D imaging of foods using X-ray microtomography
  • Analysing heat and drought stress on food crops

Wood and paper

Building materials

Geology

In geology, micro-CT is used to analyze micropores in reservoir rocks, and it can be used in microfacies analysis for sequence stratigraphy. In petroleum exploration it is used to model petroleum flow at the scale of micropores and nanoparticles.

It can give a resolution up to 1 nm.

Fossils

Microfossils

X-ray microtomography of a radiolarian, Triplococcus acanthicus
This is a microfossil from the Middle Ordovician with four nested spheres. The innermost sphere is highlighted red. Each segment is shown at the same scale.
  • Benthonic foraminifers

Palaeography

  • Digitally unfolding letters of correspondence which employed letterlocking.

Space

Stereo images

  • Visualizing with blue and green or blue filters to see depth

Others

Neuroregeneration

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Neuroregeneration

Neuroregeneration refers to the regrowth or repair of nervous tissues, cells or cell products. Such mechanisms may include generation of new neurons, glia, axons, myelin, or synapses. Neuroregeneration differs between the peripheral nervous system (PNS) and the central nervous system (CNS) by the functional mechanisms involved, especially in the extent and speed of repair. When an axon is damaged, the distal segment undergoes Wallerian degeneration, losing its myelin sheath. The proximal segment can either die by apoptosis or undergo the chromatolytic reaction, which is an attempt at repair. In the CNS, synaptic stripping occurs as glial foot processes invade the dead synapse.

Nervous system injuries affect over 90,000 people every year. It is estimated that spinal cord injuries alone affect 10,000 each year. As a result of this high incidence of neurological injuries, nerve regeneration and repair, a subfield of neural tissue engineering, is becoming a rapidly growing field dedicated to the discovery of new ways to recover nerve functionality after injury. The nervous system is divided into two parts: the central nervous system, which consists of the brain and spinal cord, and the peripheral nervous system, which consists of cranial and spinal nerves along with their associated ganglia. While the peripheral nervous system has an intrinsic ability for repair and regeneration, the central nervous system is, for the most part, incapable of self-repair and regeneration. There is currently no treatment for recovering human nerve function after injury to the central nervous system. In addition, multiple attempts at nerve re-growth across the PNS-CNS transition have not been successful. There is simply not enough knowledge about regeneration in the central nervous system. In addition, although the peripheral nervous system has the capability for regeneration, much research still needs to be done to optimize the environment for maximum regrowth potential. Neuroregeneration is important clinically, as it is part of the pathogenesis of many diseases, including multiple sclerosis.

Peripheral nervous system regeneration

Guillain–Barré syndrome – nerve damage

Neuroregeneration in the peripheral nervous system (PNS) occurs to a significant degree. After an injury to the axon, peripheral neurons activate a variety of signaling pathways which turn on pro-growth genes, leading to reformation of a functional growth cone and regeneration. The growth of these axons is also governed by chemotactic factors secreted from Schwann cells. Injury to the peripheral nervous system immediately elicits the migration of phagocytes, Schwann cells, and macrophages to the lesion site in order to clear away debris such as damaged tissue which is inhibitory to regeneration. When a nerve axon is severed, the end still attached to the cell body is labeled the proximal segment, while the other end is called the distal segment. After injury, the proximal end swells and experiences some retrograde degeneration, but once the debris is cleared, it begins to sprout axons and the presence of growth cones can be detected. The proximal axons are able to regrow as long as the cell body is intact, and they have made contact with the Schwann cells in the endoneurium (also known as the endoneurial tube or channel). Human axon growth rates can reach 2 mm/day in small nerves and 5 mm/day in large nerves. The distal segment, however, experiences Wallerian degeneration within hours of the injury; the axons and myelin degenerate, but the endoneurium remains. In the later stages of regeneration the remaining endoneurial tube directs axon growth back to the correct targets. During Wallerian degeneration, Schwann cells grow in ordered columns along the endoneurial tube, creating a band of Büngner cells that protects and preserves the endoneurial channel. Also, macrophages and Schwann cells release neurotrophic factors that enhance re-growth.

Central nervous system regeneration

Unlike peripheral nervous system injury, injury to the central nervous system is not followed by extensive regeneration. It is limited by the inhibitory influences of the glial and extracellular environment. The hostile, non-permissive growth environment is, in part, created by the migration of myelin-associated inhibitors, astrocytes, oligodendrocytes, oligodendrocyte precursors, and microglia. The environment within the CNS, especially following trauma, counteracts the repair of myelin and neurons. Growth factors are not expressed or re-expressed; for instance, the extracellular matrix is lacking laminins. Glial scars rapidly form, and the glia actually produce factors that inhibit remyelination and axon repair; for instance, NOGO and NI-35. The axons themselves also lose the potential for growth with age, due to a decrease in GAP43 expression, among others.

Slower degeneration of the distal segment than that which occurs in the peripheral nervous system also contributes to the inhibitory environment, because inhibitory myelin and axonal debris are not cleared away as quickly. All these factors contribute to the formation of what is known as a glial scar, which axons cannot grow across. The proximal segment attempts to regenerate after injury, but its growth is hindered by the environment. It is important to note that central nervous system axons have been proven to regrow in permissive environments; therefore, the primary obstacle to central nervous system axonal regeneration is crossing or eliminating the inhibitory lesion site. Another problem is that the morphology and functional properties of central nervous system neurons are highly complex; for this reason, a neuron cannot be functionally replaced by one of another type (Llinás' law).

Inhibition of axonal regrowth

Glial cell scar formation is induced following damage to the nervous system. In the central nervous system, this glial scar formation significantly inhibits nerve regeneration, which leads to a loss of function. Several families of molecules are released that promote and drive glial scar formation. For instance, transforming growth factors β-1 and β-2, interleukins, and cytokines play a role in the initiation of scar formation. The accumulation of reactive astrocytes at the site of injury and the up-regulation of molecules that are inhibitory for neurite outgrowth contribute to the failure of neuroregeneration. The up-regulated molecules alter the composition of the extracellular matrix in a way that has been shown to inhibit neurite outgrowth extension. This scar formation involves several cell types and families of molecules.

Chondroitin sulfate proteoglycan

In response to scar-inducing factors, astrocytes up-regulate the production of chondroitin sulfate proteoglycans. Astrocytes are a predominant type of glial cell in the central nervous system that provide many functions including damage mitigation, repair, and glial scar formation. The RhoA pathway is involved. Chondroitin sulfate proteoglycans (CSPGs) have been shown to be up-regulated in the central nervous system (CNS) following injury. Glycosaminoglycans (CS-GAGs), repeating disaccharides of glucuronic acid and galactosamine, are covalently coupled to the CSPG protein core. CSPGs have been shown to inhibit regeneration in vitro and in vivo, but the relative roles of the CSPG core protein and the CS-GAGs were not studied until recently.

Keratan sulfate proteoglycans

Like the chondroitin sulfate proteoglycans, keratan sulfate proteoglycan (KSPG) production is up-regulated in reactive astrocytes as part of glial scar formation. KSPGs have also been shown to inhibit neurite outgrowth extension, limiting nerve regeneration. Keratan sulfate, also called keratosulfate, is formed from repeating disaccharide units of galactose and N-acetylglucosamine. It is also 6-sulfated, and this sulfation is crucial to the elongation of the keratan sulfate chain. A study was done using N-acetylglucosamine 6-O-sulfotransferase-1-deficient mice. The wild-type mice showed a significant up-regulation of mRNA expressing N-acetylglucosamine 6-O-sulfotransferase-1 at the site of cortical injury. In the N-acetylglucosamine 6-O-sulfotransferase-1-deficient mice, however, the expression of keratan sulfate was significantly decreased compared to the wild type. Similarly, glial scar formation was significantly reduced in the deficient mice, and as a result nerve regeneration was less inhibited.

Other inhibitory factors

Proteins of oligodendritic or glial debris origin that influence neuroregeneration:

  • NOGO – The protein family Nogo, particularly Nogo-A, has been identified as an inhibitor of remyelination in the CNS, especially in autoimmune mediated demyelination, such as found in experimental autoimmune encephalomyelitis (EAE) and multiple sclerosis (MS). Nogo-A functions via either its amino-Nogo terminus through an unknown receptor, or by its Nogo-66 terminus through NgR1, p75, TROY or LINGO1. Antagonising this inhibitor results in improved remyelination, as it is involved in the RhoA pathway.
  • NI-35 – a non-permissive growth factor from myelin.
  • MAG – Myelin-associated glycoprotein acts via the receptors NgR2, GT1b, NgR1, p75, TROY and LINGO1.
  • OMgp – Oligodendrocyte myelin glycoprotein.
  • Ephrin B3 functions through the EphA4 receptor and inhibits remyelination.
  • Sema 4D (Semaphorin 4D) functions through the PlexinB1 receptor and inhibits remyelination.
  • Sema 3A (Semaphorin 3A) is present in the scar that forms in both central nervous system and peripheral nerve injuries and contributes to the outgrowth-inhibitory properties of these scars.

Clinical treatments

Surgery

Surgery can be performed if a peripheral nerve has been cut or otherwise divided. This is called peripheral nerve reconstruction. The injured nerve is identified and exposed so that normal nerve tissue can be examined above and below the level of injury, usually with magnification, using either loupes or an operating microscope. If a large segment of nerve is harmed, as can happen in a crush or stretch injury, the nerve will need to be exposed over a larger area. Injured portions of the nerve are removed. The cut nerve endings are then carefully reapproximated using very small sutures. The nerve repair must be covered by healthy tissue, which can be as simple as closing the skin over it, or it can require moving skin or muscle to provide healthy padded coverage over the nerve. The type of anesthesia used depends on the complexity of the injury. A surgical tourniquet is almost always used.

Prognosis

The expected outcome after surgical repair of a divided peripheral nerve depends on several factors:

  • Age: Recovery of a nerve after surgical repair depends mainly on the age of the patient. Young children can recover close-to-normal nerve function. In contrast, a patient over 60 years old with a cut nerve in the hand would expect to recover only protective sensation; that is, the ability to distinguish hot/cold or sharp/dull.
  • The mechanism of injury: Sharp injuries, such as a knife wound, damage only a very short segment of the nerve, allowing direct suture repair. In contrast, nerves that are divided by stretch or crush may be damaged over long segments. These nerve injuries are more difficult to treat and generally have a poorer outcome. In addition, associated injuries, like injury to bone, muscle and skin, can make nerve recovery more difficult.
  • The level of injury: After a nerve is repaired, the regenerating nerve endings must grow all the way to their target. For example, a nerve injured at the wrist that normally provides sensation to the thumb must grow to the end of the thumb in order to provide sensation. The return of function decreases with increased distance over which a nerve must grow.

Autologous nerve grafting

Currently, autologous nerve grafting, or a nerve autograft, is known as the gold standard for clinical treatments used to repair large lesion gaps in the peripheral nervous system. It is important that nerves are not repaired under tension, which could otherwise happen if cut ends are reapproximated across a gap. Nerve segments are taken from another part of the body (the donor site) and inserted into the lesion to provide endoneurial tubes for axonal regeneration across the gap. However, this is not a perfect treatment; often the outcome is only limited functional recovery. Also, partial denervation is frequently experienced at the donor site, and multiple surgeries are required to harvest the tissue and implant it.

When appropriate, a nearby donor may be used to supply innervation to lesioned nerves. Trauma to the donor can be minimized by utilizing a technique known as end-to-side repair. In this procedure, an epineurial window is created in the donor nerve and the proximal stump of the lesioned nerve is sutured over the window. Regenerating axons are redirected into the stump. Efficacy of this technique is partially dependent upon the degree of partial neurectomy performed on the donor, with increasing degrees of neurectomy giving rise to increasing axon regeneration within the lesioned nerve, but with the consequence of increasing deficit to the donor.

Some evidence suggests that local delivery of soluble neurotrophic factors at the site of autologous nerve grafting may enhance axon regeneration within the graft and help expedite functional recovery of a paralyzed target. Other evidence suggests that gene-therapy induced expression of neurotrophic factors within the target muscle itself can also help enhance axon regeneration. Accelerating neuroregeneration and the reinnervation of a denervated target is critically important in order to reduce the possibility of permanent paralysis due to muscular atrophy.

Allografts and xenografts

Variations on the nerve autograft include the allograft and the xenograft. In allografts, the tissue for the graft is taken from another person, the donor, and implanted in the recipient. Xenografts involve taking donor tissue from another species. Allografts and xenografts have the same disadvantages as autografts, but in addition, tissue rejection from immune responses must also be taken into account. Often immunosuppression is required with these grafts. Disease transmission also becomes a factor when introducing tissue from another person or animal. Overall, allografts and xenografts do not match the quality of outcomes seen with autografts, but they are necessary when there is a lack of autologous nerve tissue.

Nerve guidance conduit

Because of the limited functionality received from autografts, the current gold standard for nerve regeneration and repair, recent neural tissue engineering research has focused on the development of bioartificial nerve guidance conduits in order to guide axonal regrowth. The creation of artificial nerve conduits is also known as entubulation because the nerve ends and intervening gap are enclosed within a tube composed of biological or synthetic materials.

Immunisation

A direction of research is the use of drugs that target remyelination-inhibiting proteins or other inhibitors. Possible strategies include vaccination against these proteins (active immunisation) or treatment with previously created antibodies (passive immunisation). These strategies appear promising in animal models of experimental autoimmune encephalomyelitis (EAE), a model of MS. Monoclonal antibodies have also been used against inhibitory factors such as NI-35 and NOGO.

Distance education

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Distance_...