
Friday, June 14, 2024

Soap

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Soap
A handmade soap bar
Two equivalent images of the chemical structure of sodium stearate, a typical ingredient found in bar soaps
The chemical structure of sodium laureth sulfate, a typical ingredient found in liquid soaps

Soap is a salt of a fatty acid used in a variety of cleansing and lubricating products. In a domestic setting, soaps are surfactants usually used for washing, bathing, and other types of housekeeping. In industrial settings, soaps are used as thickeners, components of some lubricants, and precursors to catalysts.

When used for cleaning, soap solubilizes particles and grime, which can then be separated from the article being cleaned. In hand washing, when lathered with a little water, soap acts as a surfactant and kills microorganisms by disorganizing their membrane lipid bilayer and denaturing their proteins. It also emulsifies oils, enabling them to be carried away by running water.

Soap is created by mixing fats and oils with a base. Humans have used soap for millennia; evidence exists for the production of soap-like materials in ancient Babylon around 2800 BC.

History

Ancient Middle East

Box for Amigo del Obrero (Worker's Friend) soap from the 20th century, part of the Museo del Objeto del Objeto collection

It is uncertain as to who was the first to invent soap. The earliest recorded evidence of the production of soap-like materials dates back to around 2800 BC in ancient Babylon. A formula for making soap was written on a Sumerian clay tablet around 2500 BC; the soap was produced by heating a mixture of oil and wood ash, the earliest recorded chemical reaction, and used for washing woolen clothing.

The Ebers papyrus (Egypt, 1550 BC) indicates the ancient Egyptians used soap as a medicine and combined animal fats or vegetable oils with a soda ash substance called trona to create their soaps. Egyptian documents mention a similar substance was used in the preparation of wool for weaving.

In the reign of Nabonidus (556–539 BC), a recipe for soap consisted of uhulu [ashes], cypress [oil] and sesame [seed oil] "for washing the stones for the servant girls".

In the Southern Levant, the ashes from barilla plants, such as species of Salsola, saltwort (Seidlitzia rosmarinus) and Anabasis, were used in soap production; this alkali was known as potash. Traditionally, olive oil was used instead of animal lard throughout the Levant; it was boiled in a copper cauldron for several days. As the boiling progressed, alkali ashes and smaller quantities of quicklime were added, and the mixture was constantly stirred. In the case of lard, it required constant stirring while kept lukewarm until it began to trace. Once it began to thicken, the brew was poured into a mold and left to cool and harden for two weeks. After hardening, it was cut into smaller cakes. Aromatic herbs, such as yarrow leaves, lavender and germander, were often added to the finished soap to impart fragrance.

Roman Empire

Pliny the Elder, whose writings chronicle life in the first century AD, describes soap as "an invention of the Gauls". The word sapo, Latin for soap, likely was borrowed from an early Germanic language and is cognate with Latin sebum, "tallow". It first appears in Pliny the Elder's account, Historia Naturalis, which discusses the manufacture of soap from tallow and ashes. There he mentions its use in the treatment of scrofulous sores, as well as among the Gauls as a dye to redden hair, which the men in Germania were more likely to use than women. The Romans avoided washing with harsh soaps before encountering the milder soaps used by the Gauls around 58 BC. Aretaeus of Cappadocia, writing in the 2nd century AD, observes among "Celts, which are men called Gauls, those alkaline substances that are made into balls [...] called soap". The Romans' preferred method of cleaning the body was to massage oil into the skin and then scrape away both the oil and any dirt with a strigil. The standard design is a curved blade with a handle, all of which is made of metal.

The 2nd-century AD physician Galen describes soap-making using lye and prescribes washing to carry away impurities from the body and clothes. The use of soap for personal cleanliness became increasingly common in this period. According to Galen, the best soaps were Germanic, and soaps from Gaul were second best. Zosimos of Panopolis, circa 300 AD, describes soap and soapmaking.

Ancient China

A detergent similar to soap was manufactured in ancient China from the seeds of Gleditsia sinensis. Another traditional detergent is a mixture of pig pancreas and plant ash called zhuyizi (simplified Chinese: 猪胰子; traditional Chinese: 豬胰子; pinyin: zhūyízǐ). Soap made of animal fat did not appear in China until the modern era. Soap-like detergents were not as popular as ointments and creams.

Islamic Golden Age

Hard toilet soap with a pleasant smell was produced in the Middle East during the Islamic Golden Age, when soap-making became an established industry. Recipes for soap-making are described by Muhammad ibn Zakariya al-Razi (c. 865–925), who also gave a recipe for producing glycerine from olive oil. In the Middle East, soap was produced from the interaction of fatty oils and fats with alkali. In Syria, soap was produced using olive oil together with alkali and lime. Soap was exported from Syria to other parts of the Muslim world and to Europe.

A 12th-century document describes the process of soap production. It mentions the key ingredient, alkali (a word derived from al-qaly, "ashes"), which later became crucial to modern chemistry.

By the 13th century, the manufacture of soap in the Middle East had become a major cottage industry, with sources in Nablus, Fes, Damascus, and Aleppo.

Medieval Europe

Marseille soap in blocks of 600 g

Soapmakers in Naples were members of a guild in the late sixth century (then under the control of the Eastern Roman Empire), and in the eighth century, soap-making was well known in Italy and Spain. The Carolingian capitulary De Villis, dating to around 800 and representing the royal will of Charlemagne, mentions soap as being one of the products the stewards of royal estates are to tally. Medieval Spain was a leading producer of soap by 800, and soapmaking began in the Kingdom of England about 1200. Soapmaking is mentioned both as "women's work" and as the produce of "good workmen" alongside other necessities, such as the produce of carpenters, blacksmiths, and bakers.

In Europe, soap in the 9th century was produced from animal fats and had an unpleasant smell. This changed when olive oil began to be used in soap formulas instead, after which much of Europe's soap production moved to the Mediterranean olive-growing regions. Hard toilet soap was introduced to Europe by Arabs and gradually spread as a luxury item. It was often perfumed. By the 15th century, the manufacture of soap in Christendom had become virtually industrialized, with sources in Antwerp, Castile, Marseille, Naples and Venice.

16th–17th century

In France, by the second half of the 16th century, the semi-industrialized professional manufacture of soap was concentrated in a few centers of Provence (Toulon, Hyères, and Marseille), which supplied the rest of France. In Marseille, by 1525, production was concentrated in at least two factories, and soap production at Marseille tended to eclipse the other Provençal centers. English manufacture tended to concentrate in London.

Finer soaps were later produced in Europe from the 17th century, using vegetable oils (such as olive oil) as opposed to animal fats. Many of these soaps are still produced, both industrially and by small-scale artisans. Castile soap is a popular example of the vegetable-only soaps derived from the oldest "white soap" of Italy. In 1634, Charles I granted the newly formed Society of Soapmakers a monopoly in soap production; the society produced certificates from 'foure Countesses, and five Viscountesses, and divers other Ladies and Gentlewomen of great credite and quality, besides common Laundresses and others', testifying that 'the New White Soap washeth whiter and sweeter than the Old Soap'.

During the Restoration era (February 1665 – August 1714) a soap tax was introduced in England, which meant that until the mid-1800s, soap was a luxury, used regularly only by the well-to-do. The soap manufacturing process was closely supervised by revenue officials who made sure that soapmakers' equipment was kept under lock and key when not being supervised. Moreover, soap could not be produced by small makers because of a law that stipulated that soap boilers must manufacture a minimum quantity of one imperial ton at each boiling, which placed the process beyond the reach of the average person. The soap trade was boosted and deregulated when the tax was repealed in 1853.

Modern period

Industrially manufactured bar soaps became available in the late 18th century, as advertising campaigns in Europe and America promoted popular awareness of the relationship between cleanliness and health. In modern times, the use of soap has become commonplace in industrialized nations due to a better understanding of the role of hygiene in reducing the population size of pathogenic microorganisms.

Caricature of Lillie Langtry, from Punch, Christmas 1890: The soap box on which she sits reflects her endorsements of cosmetics and soaps.

Until the Industrial Revolution, soapmaking was conducted on a small scale and the product was rough. In 1780, James Keir established a chemical works at Tipton, for the manufacture of alkali from the sulfates of potash and soda, to which he afterwards added a soap manufactory. The method of extraction proceeded on a discovery of Keir's. In 1790, Nicolas Leblanc discovered how to make alkali from common salt. Andrew Pears started making a high-quality, transparent soap, Pears soap, in 1807 in London. His son-in-law, Thomas J. Barratt, became the brand manager (the first of its kind) for Pears in 1865. In 1882, Barratt recruited English actress and socialite Lillie Langtry to become the poster-girl for Pears soap, making her the first celebrity to endorse a commercial product.

William Gossage produced low-priced, good-quality soap from the 1850s. Robert Spear Hudson began manufacturing a soap powder in 1837, initially by grinding the soap with a mortar and pestle. American manufacturer Benjamin T. Babbitt introduced marketing innovations that included the sale of bar soap and distribution of product samples. William Hesketh Lever and his brother, James, bought a small soap works in Warrington in 1886 and founded what is still one of the largest soap businesses, formerly called Lever Brothers and now called Unilever. These soap businesses were among the first to employ large-scale advertising campaigns.

Liquid soap

A soap dispenser

Liquid soap was not invented until the nineteenth century; in 1865, William Sheppard patented a liquid version of soap. In 1898, B.J. Johnson developed a soap derived from palm and olive oils; his company, the B.J. Johnson Soap Company, introduced "Palmolive" brand soap that same year. This new brand of soap became popular rapidly, and to such a degree that B.J. Johnson Soap Company changed its name to Palmolive.

In the early 1900s, other companies began to develop their own liquid soaps. Such products as Pine-Sol and Tide appeared on the market, making the process of cleaning things other than skin, such as clothing, floors, and bathrooms, much easier.

Liquid soap also works better for more traditional or non-machine washing methods, such as using a washboard.

Types

A collection of decorative bar soaps, as often found in hotels

Since they are salts of fatty acids, soaps have the general formula (RCO2)nM^(n+), where R is an alkyl group, M is a metal and n is the charge of the cation. The major classification of soaps is determined by the identity of M^(n+). When M is Na (sodium) or K (potassium), the soaps are called toilet soaps, used for handwashing. Many metal dications (Mg2+, Ca2+, and others) give metallic soaps. When M is Li, the result is lithium soap (e.g., lithium stearate), which is used in high-performance greases. A cation from an organic base such as ammonium can be used instead of a metal; ammonium nonanoate is an ammonium-based soap that is used as an herbicide.
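As an illustration of this general formula (the two specific salts below are chosen as assumed examples; only sodium stearate is pictured above):

```latex
% Worked examples of the general soap formula (RCO2)nM^(n+):
% sodium stearate  (R = C17H35, M = Na, n = 1)
\mathrm{C_{17}H_{35}CO_2Na}
% calcium stearate (R = C17H35, M = Ca, n = 2): a divalent cation binds two carboxylate groups
\mathrm{(C_{17}H_{35}CO_2)_2Ca}
```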

When used in hard water, soap does not lather well; the calcium and magnesium ions in the water combine with stearate and other fatty acid anions from the soap to form an insoluble precipitate known as soap scum.

Non-toilet soaps

Soaps are key components of most lubricating greases and thickeners. Greases are usually emulsions of calcium soap or lithium soap and mineral oil. Many other metallic soaps are also useful, including those of aluminium, sodium, and mixtures thereof. Such soaps are also used as thickeners to increase the viscosity of oils. In ancient times, lubricating greases were made by the addition of lime to olive oil.

Metal soaps are also included in modern artists' oil paint formulations as a rheology modifier.

Production of metallic soaps

Most metal soaps are prepared by neutralization of fatty acids, for example with a metal oxide:

2 RCO2H + CaO → (RCO2)2Ca + H2O

Toilet soaps

In a domestic setting, "soap" usually refers to what is technically called a toilet soap, used for household and personal cleaning. When used for cleaning, soap solubilizes particles and fats/oils, which can then be separated from the article being cleaned. The insoluble oil/fat molecules become associated inside micelles, tiny spheres formed from soap molecules with polar hydrophilic (water-attracting) groups on the outside and encasing a lipophilic (fat-attracting) pocket, which shields the oil/fat molecules from the water, making them soluble. Anything that is soluble will be washed away with the water.

Structure of a micelle, a cell-like structure formed by the aggregation of soap subunits (such as sodium stearate): The exterior of the micelle is hydrophilic (attracted to water) and the interior is lipophilic (attracted to oils).

Production of toilet soaps

The production of toilet soaps usually entails saponification of triglycerides, which are vegetable or animal oils and fats. An alkaline solution (often lye or sodium hydroxide) induces saponification whereby the triglyceride fats first hydrolyze into salts of fatty acids. Glycerol (glycerin) is liberated. The glycerin can remain in the soap product as a softening agent, although it is sometimes separated.
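As a schematic illustration of this reaction (glyceryl tristearate and sodium hydroxide are used here as assumed examples; other triglycerides and alkalis behave analogously):

```latex
% Saponification of a triglyceride (glyceryl tristearate shown) by sodium hydroxide:
% triglyceride + 3 NaOH -> glycerol + 3 soap molecules (sodium stearate)
\mathrm{(C_{17}H_{35}CO_2)_3C_3H_5 + 3\,NaOH \longrightarrow C_3H_5(OH)_3 + 3\,C_{17}H_{35}CO_2Na}
```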

The type of alkali metal used determines the kind of soap product. Sodium soaps, prepared from sodium hydroxide, are firm, whereas potassium soaps, derived from potassium hydroxide, are softer or often liquid. Historically, potassium hydroxide was extracted from the ashes of bracken or other plants. Lithium soaps also tend to be hard. These are used exclusively in greases.

For making toilet soaps, triglycerides (oils and fats) are derived from coconut, olive, or palm oils, as well as tallow. Triglyceride is the chemical name for the triesters of fatty acids and glycerin. Tallow, i.e., rendered fat, is the most available triglyceride from animals. Each species offers quite different fatty acid content, resulting in soaps of distinct feel. The seed oils give softer but milder soaps. Soap made from pure olive oil, sometimes called Castile soap or Marseille soap, is reputed for its particular mildness. The term "Castile" is also sometimes applied to soaps from a mixture of oils with a high percentage of olive oil.

Fatty acid content of various fats used for soapmaking

Fat               Lauric  Myristic  Palmitic  Stearic  Oleic  Linoleic  Linolenic
                  (C12)   (C14)     (C16)     (C18)    (C18)  (C18)     (C18)
Tallow              0       4        28        23       35      2         1
Coconut oil        48      18         9         3        7      2         0
Palm kernel oil    46      16         8         3       12      2         0
Palm oil            0       1        44         4       37      9         0
Laurel oil         54       0         0         0       15     17         0
Olive oil           0       0        11         2       78     10         0
Canola oil          0       1         3         2       58      9        23

(Lauric, myristic, palmitic and stearic acids are saturated; oleic acid is monounsaturated; linoleic acid is diunsaturated; linolenic acid is triunsaturated.)
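Because a soap recipe usually blends several oils, the table above can be used to estimate the fatty-acid profile of a mixture as a simple weighted average. The Python sketch below illustrates the arithmetic; the 30% coconut / 70% olive blend is a hypothetical example, and only a few fats from the table are included.

```python
# Illustrative sketch: estimate the fatty-acid profile of an oil blend as a
# weighted average of the table values above. Percentages come from the table;
# the 30/70 blend below is a made-up example, not a recommended recipe.

FATTY_ACIDS = ["lauric", "myristic", "palmitic", "stearic",
               "oleic", "linoleic", "linolenic"]

# fat -> percentage of each fatty acid, in the order of FATTY_ACIDS
PROFILES = {
    "coconut oil": [48, 18, 9, 3, 7, 2, 0],
    "olive oil":   [0, 0, 11, 2, 78, 10, 0],
    "tallow":      [0, 4, 28, 23, 35, 2, 1],
}

def blend_profile(recipe):
    """recipe: dict of fat name -> weight fraction (fractions should sum to 1)."""
    profile = [0.0] * len(FATTY_ACIDS)
    for fat, fraction in recipe.items():
        for i, pct in enumerate(PROFILES[fat]):
            profile[i] += fraction * pct
    return dict(zip(FATTY_ACIDS, profile))

if __name__ == "__main__":
    # Hypothetical example: a 30% coconut oil / 70% olive oil soap
    for acid, pct in blend_profile({"coconut oil": 0.3, "olive oil": 0.7}).items():
        print(f"{acid:10s} {pct:5.1f}%")
```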

Soap-making for hobbyists

Manufacturing process of soaps/detergents

A variety of methods are available for hobbyists to make soap. Most soapmakers use processes where the glycerol remains in the product, and the saponification continues for many days after the soap is poured into molds. The glycerol is left during the hot process method, but at the high temperature employed, the reaction is practically completed in the kettle, before the soap is poured into molds. This simple and quick process is employed in small factories all over the world.

Handmade soap from the cold process also differs from industrially made soap in that an excess of fat is used, beyond that needed to consume the alkali (in a cold-pour process, this excess fat is called "superfatting"), and the glycerol left in acts as a moisturizing agent. However, the glycerine also makes the soap softer. The addition of glycerol and processing of this soap produces glycerin soap. Superfatted soap is more skin-friendly than one without extra fat, although it can leave a "greasy" feel. Sometimes, an emollient is added, such as jojoba oil or shea butter. Sand or pumice may be added to produce a scouring soap. The scouring agents serve to remove dead cells from the skin surface being cleaned. This process is called exfoliation.

To make antibacterial soap, compounds such as triclosan or triclocarban can be added. There is some concern that use of antibacterial soaps and other products might encourage antimicrobial resistance in microorganisms.

Desertification

From Wikipedia, the free encyclopedia
Global distribution of dryland subtypes based on the aridity index computed over a 30-year average during 1981 to 2010. Typical deserts are indicated by the hyper-arid category (light yellow)

Desertification is a type of gradual land degradation in which fertile land becomes arid desert due to a combination of natural processes and human activities. This spread of arid areas is caused by a variety of factors, such as overexploitation of soil as a result of human activity and the effects of climate change. The geographic areas most affected are located in Africa (the Sahel region), Asia (the Gobi Desert and Mongolia) and parts of South America. Drylands occupy approximately 40–41% of Earth's land area and are home to more than 2 billion people. Effects of desertification include sand and dust storms, food insecurity, and poverty.
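The dryland subtypes shown in the figure above are defined by the aridity index, the ratio of mean annual precipitation to potential evapotranspiration. The sketch below illustrates this classification; the numeric thresholds follow the commonly used UNEP convention and are an assumption rather than values stated in this article.

```python
# Illustrative sketch: classify a location into dryland subtypes using the
# aridity index AI = P / PET (precipitation over potential evapotranspiration).
# Thresholds below follow the widely used UNEP convention (assumed, not from
# this article).

def dryland_subtype(precip_mm: float, pet_mm: float) -> str:
    ai = precip_mm / pet_mm
    if ai < 0.05:
        return "hyper-arid"        # typical deserts
    elif ai < 0.20:
        return "arid"
    elif ai < 0.50:
        return "semi-arid"
    elif ai < 0.65:
        return "dry sub-humid"
    return "humid (not dryland)"

# Hypothetical example values
print(dryland_subtype(precip_mm=250, pet_mm=1800))  # -> "arid"
```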

Humans can fight desertification in various ways. For instance, improving soil quality, greening deserts, managing grazing better, and planting trees (reforestation and afforestation) can all help reverse desertification.

Throughout geological history, the development of deserts has occurred naturally over long intervals of time. The modern study of desertification emerged from the study of the 1980s drought in the Sahel.

Definitions

As recently as 2005, considerable controversy existed over the proper definition of the term "desertification." Helmut Geist (2005) identified more than 100 formal definitions. The most widely accepted of these was that of the Princeton University Dictionary which defined it as "the process of fertile land transforming into desert typically as a result of deforestation, drought or improper/inappropriate agriculture". This definition clearly demonstrated the interconnectedness of desertification and human activities, in particular land use and land management practices. It also highlighted the economic, social and environmental implications of desertification.

However, this original understanding that desertification involved the physical expansion of deserts has been rejected as the concept has further evolved since then. Desertification has been defined in the text of the United Nations Convention to Combat Desertification (UNCCD) as "land degradation in arid, semi-arid and dry sub-humid regions resulting from various factors, including climatic variations and human activities," according to Hulme and Kelly (1993).

There exists also controversy around the sub-grouping of types of desertification, including, for example, the validity and usefulness of such terms as "man-made desert" and "non-pattern desert".

Causes

Goats inside of a pen in Norte Chico, Chile. Overgrazing of drylands by poorly managed traditional herding is one of the primary causes of desertification.
 
Wildebeest in Masai Mara during the Great Migration. Overgrazing is not necessarily caused by nomadic grazers in large travelling herd populations.

Immediate causes

The immediate cause of desertification is the loss of most vegetation. This is driven by a number of factors, alone or in combination, such as drought, climatic shifts, tillage for agriculture, overgrazing and deforestation for fuel or construction materials. Vegetation plays a major role in determining the biological composition of the soil; studies have shown that, in many environments, the rate of erosion and runoff decreases exponentially with increased vegetation cover. Unprotected, dry soil surfaces blow away with the wind or are washed away by flash floods, leaving infertile lower soil layers that bake in the sun and become an unproductive hardpan.
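The exponential relationship mentioned above can be written, purely as an illustrative functional form (not a formula given in the article), as:

```latex
% Illustrative form only: erosion/runoff rate E as a function of fractional
% vegetation cover C, with E_0 the bare-soil rate and k an empirical constant.
E(C) = E_0 \, e^{-kC}, \qquad 0 \le C \le 1
```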

Influence of human activities

Early studies argued one of the most common causes of desertification was overgrazing, the overconsumption of vegetation by cattle or other livestock. However, the role of local overexploitation in driving desertification in the recent past is controversial. Drought in the Sahel region is now thought to be principally the result of seasonal variability in rainfall caused by large-scale sea surface temperature variations, largely driven by natural variability and anthropogenic emissions of aerosols (reflective sulphate particles) and greenhouse gases. As a result, changing ocean temperature and reductions in sulfate emissions have caused a re-greening of the region. This has led some scholars to argue that agriculture-induced vegetation loss is a minor factor in desertification.

A shepherd guiding his sheep through the high desert outside Marrakech, Morocco

Human population dynamics have a considerable impact on overgrazing, over-farming and deforestation, as previously acceptable techniques have become unsustainable.

There are multiple reasons farmers use intensive farming as opposed to extensive farming, but the main reason is to maximize yields. Increasing productivity in this way requires much more fertilizer, more pesticides, and more labor to maintain machinery. This continuous use of the land rapidly depletes the nutrients of the soil, causing desertification to spread.

Natural variations

Scientists agree that the existence of a desert in the place where the Sahara desert is now located is due to natural variations in solar insolation caused by the orbital precession of the Earth. Such variations influence the strength of the West African Monsoon, inducing feedbacks in vegetation and dust emission that amplify the cycle of wet and dry Sahara climate. There is also a suggestion that the transition of the Sahara from savanna to desert during the mid-Holocene was partially due to overgrazing by the cattle of the local population.

Climate change

Research into desertification is complex, and there is no single metric which can define all aspects. However, more intense climate change is still expected to increase the current extent of drylands on the Earth's continents: from 38% in the late 20th century to 50% or 56% by the end of the century, under the "moderate" and high-warming Representative Concentration Pathways 4.5 and 8.5. Most of the expansion will be seen over regions such as "southwest North America, the northern fringe of Africa, southern Africa, and Australia".

Drylands cover 41% of the earth’s land surface and include 45% of the world’s agricultural land. These regions are among the most vulnerable ecosystems to anthropogenic climate and land use change and are under threat of desertification. An observation-based attribution study of desertification was carried out in 2020 which accounted for climate change, climate variability, CO2 fertilization as well as both the gradual and rapid ecosystem changes caused by land use. The study found that, between 1982 and 2015, 6% of the world’s drylands underwent desertification driven by unsustainable land use practices compounded by anthropogenic climate change. Despite an average global greening, anthropogenic climate change has degraded 12.6% (5.43 million km2) of drylands, contributing to desertification and affecting 213 million people, 93% of whom live in developing economies.

Effects

Sand and dust storms

View of Sydney Harbour Bridge covered in dust

There has been a 25% increase in global annual dust emissions between the late nineteenth century and the present day. The increase in desertification has also increased the amount of loose sand and dust that the wind can pick up, ultimately resulting in dust storms. For example, dust storms in the Middle East “are becoming more frequent and intense in recent years” because “long-term reductions in rainfall [cause] lower soil moisture and vegetative cover”.

Dust storms can contribute to certain respiratory disorders such as pneumonia, skin irritations, asthma and many more. They can pollute open water, reduce the effectiveness of clean energy efforts, and halt most forms of transportation.

Dust and sand storms can have a negative effect on the climate, which can make desertification worse. Dust particles in the air scatter incoming radiation from the sun (Hassan, 2012). The dust can momentarily shield the ground and lower its surface temperature, but the atmospheric temperature will increase. This can deform clouds and shorten their lifetime, which can result in less rainfall.

Food insecurity

Global food security is threatened by desertification. As the population grows, more food has to be grown. Agricultural business is being displaced from one country to another; for example, Europe on average imports over 50% of its food. Meanwhile, 44% of agricultural land is located in drylands, and it supplies 60% of the world's food production. Desertification is decreasing the amount of land available for sustainable agriculture, while demand continues to grow; in the near future, demand will outstrip supply. The violent herder–farmer conflicts in Nigeria, Sudan, Mali and other countries in the Sahel region have been exacerbated by climate change, land degradation and population growth.

Increasing poverty

Wind erosion outside Leuchars

At least 90% of the inhabitants of drylands live in developing countries, where they also suffer from poor economic and social conditions. This situation is exacerbated by land degradation because of the reduction in productivity, the precariousness of living conditions and the difficulty of access to resources and opportunities.

Many underdeveloped countries are affected by overgrazing, land exhaustion and overdrafting of groundwater due to pressures to exploit marginal drylands for farming. Decision-makers are understandably averse to investing in arid zones with low potential. This absence of investment contributes to the marginalization of these zones. When unfavorable agri-climatic conditions are combined with an absence of infrastructure and access to markets, as well as poorly adapted production techniques and an underfed and undereducated population, most such zones are excluded from development.

Desertification often causes rural lands to become unable to support populations of the same size that previously lived there. This results in mass migrations out of rural areas and into urban areas, particularly in Africa, creating unemployment and slums. The number of these environmental refugees grows every year, with projections for sub-Saharan Africa showing a probable increase from 14 million in 2010 to nearly 200 million by 2050. This presents a future crisis for the region, as neighboring nations do not always have the ability to support large populations of refugees.

In Mongolia, the land is 90% fragile dry land, which causes many herders to migrate to the city for work. With very limited resources, the herders that stay on the dry land graze very carefully in order to preserve the land.

Agriculture is a main source of income for many desert communities. The increase in desertification in these regions has degraded the land to such an extent that people can no longer productively farm and make a profit. This has negatively impacted the economy and increased poverty rates.

There is, however, increased global advocacy, such as UN Sustainable Development Goal 15, to combat desertification and restore affected lands.

Geographic areas affected

Drylands occupy approximately 40–41% of Earth's land area and are home to more than 2 billion people. It has been estimated that some 10–20% of drylands are already degraded, the total area affected by desertification being between 6 and 12 million square kilometers, that about 1–6% of the inhabitants of drylands live in desertified areas, and that a billion people are under threat from further desertification.

Sahel

The impact of climate change and human activities on desertification is exemplified in the Sahel region of Africa. The region is characterized by a dry, hot climate, high temperatures and low rainfall (100–600 mm per year), so droughts are the rule there. The Sahel has lost approximately 650,000 km2 of its productive agricultural land over the past 50 years; the propagation of desertification in this area is considerable.

Sahel region of Mali

The climate of the Sahara has undergone enormous variations over the last few hundred thousand years, oscillating between wet (grassland) and dry (desert) every 20,000 years (a phenomenon believed to be driven by long-term changes in the North African climate cycle that alter the path of the North African Monsoon, which are in turn linked to an approximately 40,000-year cycle in which the axial tilt of the earth changes between 22° and 24.5°). Some statistics have shown that, since 1900, the Sahara has expanded by 250 km to the south over a stretch of land 6,000 km long from west to east.

Lake Chad, located in the Sahel region, has undergone desiccation due to water withdrawal for irrigation and decrease in rainfall. The lake has shrunk by over 90% since 1987, displacing millions of inhabitants. Recent efforts have managed to make some progress toward its restoration, but it is still considered to be at risk of disappearing entirely.

To limit desertification, the Great Green Wall (Africa) initiative was started in 2007, involving the planting of vegetation along a 7,775 km stretch, 15 km wide, across 22 countries, with a target date of 2030. The purpose of this mammoth planting initiative is to enhance retention of water in the ground following the seasonal rainfall, thus promoting land rehabilitation and future agriculture. Senegal has already contributed to the project by planting 50,000 acres of trees. It is said to have improved land quality and caused an increase in economic opportunity in the region.

Gobi Desert and Mongolia

Another major area that is being impacted by desertification is the Gobi Desert located in Northern China and Southern Mongolia. The Gobi Desert is the fastest expanding desert on Earth, as it transforms over 3,600 square kilometres (1,400 square miles) of grassland into wasteland annually. Although the Gobi Desert itself is still a distance away from Beijing, reports from field studies state there are large sand dunes forming only 70 km (43.5 mi) outside the city.

In Mongolia, around 90% of grassland is considered vulnerable to desertification by the UN. An estimated 13% of desertification in Mongolia is caused by natural factors; the rest is due to human influence, particularly overgrazing and increased erosion of soils in cultivated areas. During the period 1940 to 2015, the mean air temperature increased by 2.24 °C. The warmest ten-year period was the latest decade to 2021. Precipitation has decreased by 7% over this period, resulting in increasingly arid conditions throughout Mongolia. The Gobi desert continues to expand northward, with over 70% of Mongolia's land degraded through overgrazing, deforestation, and climate change. In addition, the Mongolian government has listed forest fires, blights, unsustainable forestry and mining activities as leading causes of desertification in the country. The transition from sheep to goat farming in order to meet export demands for cashmere wool has caused degradation of grazing lands. Compared to sheep, goats do more damage to grazing lands by eating roots and flowers.

The Gobi Desert is expanding through desertification, most rapidly on the southern edge into China, which is seeing 3,600 km2 (1,390 sq mi) of grassland overtaken every year. Dust storms increased in frequency between 1996 and 2016, causing further damage to China's agriculture economy. However, in some areas desertification has been slowed or reversed.

The northern and eastern boundaries between desert and grassland are constantly changing. This is mostly due to the climate conditions before the growing season, which influence the rate of evapotranspiration and subsequent plant growth.

The expansion of the Gobi is attributed mostly to human activities, locally driven by deforestation, overgrazing, and depletion of water resources, as well as to climate change.

China has tried various plans to slow the expansion of the desert, which have met with some success. The Three-North Shelter Forest Program (or "Green Great Wall") is a Chinese government tree-planting project begun in 1978 and set to continue through 2050. The goal of the program is to reverse desertification by planting aspen and other fast-growing trees on some 36.5 million hectares across some 551 counties in 12 provinces of northern China.

South America

South America is another area vulnerable to desertification, as 25% of the land is classified as drylands and over 68% of the land area has undergone soil erosion as a result of deforestation and overgrazing. Between 27% and 43% of the land area in Bolivia, Chile, Ecuador and Peru is at risk due to desertification. In Argentina, Mexico and Paraguay, more than half the land area is degraded by desertification and cannot be used for agriculture. In Central America, drought has caused increased unemployment and decreased food security, which has also driven migration. Similar impacts have been seen in rural parts of Mexico, where about 1,000 km2 of land is lost each year to desertification. In Argentina, desertification has the potential to disrupt the nation's food supply.

Reversing desertification

A 2018 meeting in New Delhi related to the United Nations Convention to Combat Desertification
Anti-sand shields in north Sahara, Tunisia
Jojoba plantations, such as those shown, have played a role in combating edge effects of desertification in the Thar Desert, India.
Saxaul planted along roads in Xinjiang near Cherchen to slow desertification

Techniques and countermeasures exist for mitigating or reversing desertification. For some of these measures, there are numerous barriers to their implementation. Yet for others, the solution simply requires the exercise of human reason.

One proposed barrier is that the costs of adopting sustainable agricultural practices sometimes exceed the benefits for individual farmers, even while they are socially and environmentally beneficial. Another issue is a lack of political will, and lack of funding to support land reclamation and anti-desertification programs.

Desertification is recognized as a major threat to biodiversity. Some countries have developed biodiversity action plans to counter its effects, particularly in relation to the protection of endangered flora and fauna.

Improving soil quality

Techniques focus on two aspects: provisioning of water, and fixating and hyper-fertilizing the soil. Fixating the soil is often done through the use of shelter belts, woodlots and windbreaks. Windbreaks are made from trees and bushes and are used to reduce soil erosion and evapotranspiration. They were widely encouraged by development agencies from the middle of the 1980s in the Sahel area of Africa.

Some soils (for example, clay) can, due to lack of water, become consolidated rather than porous (as in the case of sandy soils). Techniques such as zaï or tillage are then used to still allow the planting of crops.

Another technique that is useful is contour trenching. This involves the digging of 150 m long, 1 m deep trenches in the soil. The trenches are made parallel to the height lines of the landscape, preventing the water from flowing within the trenches and causing erosion. Stone walls are placed around the trenches to prevent the trenches from closing up again. This method was invented by Peter Westerveld.

Enrichment of the soil and restoration of its fertility is often achieved by plants. Of these, leguminous plants, which extract nitrogen from the air and fix it in the soil, succulents (such as Opuntia), and food crops and trees such as grains, barley, beans and dates are the most important. Sand fences can also be used to control drifting of soil and sand erosion.

Another way to restore soil fertility is through the use of nitrogen-rich fertilizer. Due to the higher cost of this fertilizer, many smallholder farmers are reluctant to use it, especially in areas where subsistence farming is common. Several nations, including India, Zambia, and Malawi have responded to this by implementing subsidies to help encourage adoption of this technique.

Some research centres (such as Bel-Air Research Center IRD/ISRA/UCAD) are also experimenting with the inoculation of tree species with mycorrhiza in arid zones. Mycorrhizae are fungi that attach themselves to the roots of plants, creating a symbiotic relationship with the trees and greatly increasing the effective surface area of the trees' roots (allowing the tree to gather many more nutrients from the soil).

The bioengineering of soil microbes, particularly photosynthesizers, has also been suggested and theoretically modeled as a method to protect drylands. The aim would be to enhance the existing cooperative loops between soil microbes and vegetation.

Desert greening

As there are many different types of deserts, there are also different types of desert reclamation methodologies. An example of this is the salt flats in the Rub' al Khali desert in Saudi Arabia. These salt flats are one of the most promising desert areas for seawater agriculture and could be revitalized without the use of freshwater or much energy.

Farmer-managed natural regeneration (FMNR) is another technique that has produced successful results for desert reclamation. Since 1980, this method to reforest degraded landscape has been applied with some success in Niger. This simple and low-cost method has enabled farmers to regenerate some 30,000 square kilometers in Niger. The process involves enabling native sprouting tree growth through selective pruning of shrub shoots. The residue from pruned trees can be used to provide mulching for fields thus increasing soil water retention and reducing evaporation. Additionally, properly spaced and pruned trees can increase crop yields. The Humbo Assisted Regeneration Project which uses FMNR techniques in Ethiopia has received money from The World Bank's BioCarbon Fund, which supports projects that sequester or conserve carbon in forests or agricultural ecosystems.

Better managed grazing

Restored grasslands store CO2 from the atmosphere as organic plant material. Grazing livestock, usually not left to wander, consume the grass and minimize its growth. A method proposed to restore grasslands uses fences with many small paddocks, moving herds from one paddock to another after a day or two in order to mimic natural grazers and allow the grass to grow optimally. Proponents of managed grazing estimate that wider adoption of this method could increase the carbon content of the soils in the world's 3.5 billion hectares of agricultural grassland and offset nearly 12 years of CO2 emissions.

Planting trees

Reforestation gets at one of the root causes of desertification and is not just a treatment of the symptoms. Environmental organizations work in places where deforestation and desertification are contributing to extreme poverty. There they focus primarily on educating the local population about the dangers of deforestation and sometimes employ them to grow seedlings, which they transfer to severely deforested areas during the rainy season. The Food and Agriculture Organization of the United Nations launched the FAO Drylands Restoration Initiative in 2012 to draw together knowledge and experience on dryland restoration. In 2015, FAO published global guidelines for the restoration of degraded forests and landscapes in drylands, in collaboration with the Turkish Ministry of Forestry and Water Affairs and the Turkish Cooperation and Coordination Agency.

The "Green Wall of China" is a high-profile example of one method that has been finding success in this battle with desertification. This wall is a much larger-scale version of what American farmers did in the 1930s to stop the great Midwest dust bowl. This plan was proposed in the late 1970s, and has become a major ecological engineering project that is not predicted to end until the year 2055. According to Chinese reports, there have been nearly 66 billion trees planted in China's great green wall. The green wall of China has decreased desert land in China by an annual average of 1,980 square km. The frequency of sandstorms nationwide have fallen 20% due to the green wall. Due to the success that China has been finding in stopping the spread of desertification, plans are currently being made in Africa to start a "wall" along the borders of the Sahara desert as well to be financed by the United Nations Global Environment Facility trust.

The Great Green Wall, participating countries and Sahel. In September 2020, it was reported that the GGW had covered only 4% of the planned area.

In 2007 the African Union started the Great Green Wall of Africa project in order to combat desertification in 20 countries. The wall is planned to be 8,000 km long, stretching across the entire width of the continent, and has US$8 billion in support of the project. The project has restored 36 million hectares of land, and by 2030 the initiative plans to restore a total of 100 million hectares. The Great Green Wall has created many job opportunities for the participating countries, with over 20,000 jobs created in Nigeria alone.

History

The world's most noted deserts have been formed by natural processes interacting over long intervals of time. During most of these times, deserts have grown and shrunk independently of human activities. Paleodeserts are large sand seas now inactive because they are stabilized by vegetation, some extending beyond the present margins of core deserts, such as the Sahara, the largest hot desert.

Historical evidence shows that the serious and extensive land deterioration occurring several centuries ago in arid regions had three centers: the Mediterranean, the Mesopotamian Valley, and the Loess Plateau of China, where population was dense.

The earliest known discussion of the topic arose soon after the French colonization of West Africa, when the Comité d'Etudes commissioned a study on desséchement progressif (progressive desiccation) to explore the prehistoric expansion of the Sahara Desert. The modern study of desertification emerged from the study of the 1980s drought in the Sahel.

Thursday, June 13, 2024

Cytokine

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Cytokine
3D medical animation still showing secretion of cytokines

Cytokines are a broad and loose category of small proteins (~5–25 kDa) important in cell signaling. Due to their size, cytokines cannot cross the lipid bilayer of cells to enter the cytoplasm and therefore typically exert their functions by interacting with specific cytokine receptors on the target cell surface. Cytokines have been shown to be involved in autocrine, paracrine and endocrine signaling as immunomodulating agents.

Cytokines include chemokines, interferons, interleukins, lymphokines, and tumour necrosis factors, but generally not hormones or growth factors (despite some overlap in the terminology). Cytokines are produced by a broad range of cells, including immune cells like macrophages, B lymphocytes, T lymphocytes and mast cells, as well as endothelial cells, fibroblasts, and various stromal cells; a given cytokine may be produced by more than one type of cell. They act through cell surface receptors and are especially important in the immune system; cytokines modulate the balance between humoral and cell-based immune responses, and they regulate the maturation, growth, and responsiveness of particular cell populations. Some cytokines enhance or inhibit the action of other cytokines in complex ways. They are different from hormones, which are also important cell signaling molecules. Hormones circulate in higher concentrations, and tend to be made by specific kinds of cells. Cytokines are important in health and disease, specifically in host immune responses to infection, inflammation, trauma, sepsis, cancer, and reproduction.

The word comes from the ancient Greek language: cyto, from Greek κύτος, kytos, 'cavity, cell' + kines, from Greek κίνησις, kinēsis, 'movement'.

Discovery

Interferon-alpha, an interferon type I, was identified in 1957 as a protein that interfered with viral replication. The activity of interferon-gamma (the sole member of the interferon type II class) was described in 1965; this was the first identified lymphocyte-derived mediator. Macrophage migration inhibitory factor (MIF) was identified simultaneously in 1966 by John David and Barry Bloom.

In 1969, Dudley Dumonde proposed the term "lymphokine" to describe proteins secreted from lymphocytes and later, proteins derived from macrophages and monocytes in culture were called "monokines". In 1974, pathologist Stanley Cohen, M.D. (not to be confused with the Nobel laureate named Stanley Cohen, who was a PhD biochemist; nor with the MD geneticist Stanley Norman Cohen) published an article describing the production of MIF in virus-infected allantoic membrane and kidney cells, showing its production is not limited to immune cells. This led to his proposal of the term cytokine. In 1993, Ogawa described the early acting growth factors, intermediate acting growth factors and late acting growth factors.

Difference from hormones

Classic hormones circulate in aqueous solution in nanomolar (10^-9 M) concentrations that usually vary by less than one order of magnitude. In contrast, some cytokines (such as IL-6) circulate in picomolar (10^-12 M) concentrations that can increase up to 1,000 times during trauma or infection. The widespread distribution of cellular sources for cytokines may be a feature that differentiates them from hormones. Virtually all nucleated cells, but especially endo/epithelial cells and resident macrophages (many near the interface with the external environment) are potent producers of IL-1, IL-6, and TNF-α. In contrast, classic hormones, such as insulin, are secreted from discrete glands such as the pancreas. The current terminology refers to cytokines as immunomodulating agents.
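A rough worked comparison makes the scale difference concrete (the 1,000-fold factor is taken from the text above; the rest is simple arithmetic):

```latex
% A cytokine at a baseline picomolar concentration rising 1,000-fold during trauma or infection:
10^{-12}\ \mathrm{M} \times 10^{3} = 10^{-9}\ \mathrm{M}
% i.e. even after such a surge it only reaches the nanomolar range at which
% classic hormones circulate at baseline.
```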

A contributing factor to the difficulty of distinguishing cytokines from hormones is that some immunomodulating effects of cytokines are systemic (i.e., affecting the whole organism) rather than local. For instance, in hormone terminology, cytokines may act in an autocrine or paracrine fashion (as in chemotaxis and chemokinesis) and in an endocrine fashion (for example, as a pyrogen). Essentially, cytokines are not limited to their immunomodulatory status as molecules.

A scalable vector graphic of signal transduction pathways
Cytokines typically activate second messenger systems, like JAK-STAT pathways, as illustrated on the left side of the diagram. Conversely, hormones typically activate different signaling pathways, like G protein-coupled receptors, seen at the top of the figure.

Nomenclature

Cytokines have been classed as lymphokines, interleukins, and chemokines, based on their presumed cell of secretion, function, or target of action. Because cytokines are characterised by considerable redundancy and pleiotropism, such distinctions, allowing for exceptions, are obsolete.

  • The term interleukin was initially used by researchers for those cytokines whose presumed targets are principally white blood cells (leukocytes). It is now used largely for designation of newer cytokine molecules and bears little relation to their presumed function. The vast majority of these are produced by T-helper cells.
  • Lymphokines: produced by lymphocytes
  • Monokines: produced exclusively by monocytes
  • Interferons: involved in antiviral responses
  • Colony stimulating factors: support the growth of cells in semisolid media
  • Chemokines: mediate chemoattraction (chemotaxis) between cells.

Classification

Structural

Structural homogeneity has made it possible to partially distinguish between cytokines that do not demonstrate a considerable degree of redundancy, so that they can be classified into the following sub-families:

  1. the IL-2 subfamily. This is the largest family. It contains several non-immunological cytokines including erythropoietin (EPO) and thrombopoietin (TPO). They can be grouped into long-chain and short-chain cytokines by topology. Some members share the common gamma chain as part of their receptor.
  2. the interferon (IFN) subfamily.
  3. the IL-10 subfamily.

Functional

A classification that proves more useful in clinical and experimental practice outside of structural biology divides immunological cytokines into those that enhance cellular immune responses, type 1 (TNFα, IFN-γ, etc.), and those that enhance antibody responses, type 2 (TGF-β, IL-4, IL-10, IL-13, etc.). A key focus of interest has been that cytokines in one of these two sub-sets tend to inhibit the effects of those in the other. Dysregulation of this tendency is under intensive study for its possible role in the pathogenesis of autoimmune disorders. Several inflammatory cytokines are induced by oxidative stress. The fact that cytokines themselves trigger the release of other cytokines and also lead to increased oxidative stress makes them important in chronic inflammation, as well as in other immune responses, such as fever and the acute phase proteins of the liver (IL-1, IL-6, IL-12, IFN-α). Cytokines also play a role in anti-inflammatory pathways and are a possible therapeutic treatment for pathological pain from inflammation or peripheral nerve injury. There are both pro-inflammatory and anti-inflammatory cytokines that regulate this pathway.
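The type 1 / type 2 split can be represented as a simple lookup. The sketch below is illustrative only and covers just the example cytokines named in this section.

```python
# Minimal sketch of the functional classification described above.
# Only the cytokines explicitly named in the text are included.

TYPE_1 = {"TNF-alpha", "IFN-gamma"}              # enhance cellular immune responses
TYPE_2 = {"TGF-beta", "IL-4", "IL-10", "IL-13"}  # enhance antibody responses

def functional_class(cytokine: str) -> str:
    if cytokine in TYPE_1:
        return "type 1 (cellular immunity)"
    if cytokine in TYPE_2:
        return "type 2 (antibody responses)"
    return "unclassified in this scheme"

print(functional_class("IL-4"))  # -> type 2 (antibody responses)
```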

Receptors

In recent years, the cytokine receptors have come to demand the attention of more investigators than cytokines themselves, partly because of their remarkable characteristics and partly because a deficiency of cytokine receptors has now been directly linked to certain debilitating immunodeficiency states. In this regard, and also because the redundancy and pleomorphism of cytokines are, in fact, a consequence of their homologous receptors, many authorities think that a classification of cytokine receptors would be more clinically and experimentally useful.

A classification of cytokine receptors based on their three-dimensional structure has, therefore, been attempted. Such a classification, though seemingly cumbersome, provides several unique perspectives for attractive pharmacotherapeutic targets.

  • Immunoglobulin (Ig) superfamily, which are ubiquitously present throughout several cells and tissues of the vertebrate body, and share structural homology with immunoglobulins (antibodies), cell adhesion molecules, and even some cytokines. Examples: IL-1 receptor types.
  • Hemopoietic Growth Factor (type 1) family, whose members have certain conserved motifs in their extracellular amino-acid domain. The IL-2 receptor belongs to this family, whose γ-chain (common to several other cytokines) deficiency is directly responsible for the X-linked form of Severe Combined Immunodeficiency (X-SCID).
  • Interferon (type 2) family, whose members are receptors for IFN β and γ.
  • Tumor necrosis factors (TNF) (type 3) family, whose members share a cysteine-rich common extracellular binding domain, and includes several other non-cytokine ligands like CD40, CD27 and CD30, besides the ligands on which the family is named.
  • Seven transmembrane helix family, the ubiquitous receptor type of the animal kingdom. All G protein-coupled receptors (for hormones and neurotransmitters) belong to this family. Chemokine receptors, two of which act as binding proteins for HIV (CXCR4 and CCR5), also belong to this family.
  • Interleukin-17 receptor (IL-17R) family, which shows little homology with any other cytokine receptor family. Structural motifs conserved between members of this family include: an extracellular fibronectin III-like domain, a transmembrane domain and a cytoplasmic SEFIR domain. The known members of this family are as follows: IL-17RA, IL-17RB, IL-17RC, IL-17RD and IL-17RE.

Cellular effects

Each cytokine has a matching cell-surface receptor. Subsequent cascades of intracellular signaling then alter cell functions. This may include the upregulation and/or downregulation of several genes and their transcription factors, resulting in the production of other cytokines, an increase in the number of surface receptors for other molecules, or the suppression of their own effect by feedback inhibition. The effect of a particular cytokine on a given cell depends on the cytokine, its extracellular abundance, the presence and abundance of the complementary receptor on the cell surface, and downstream signals activated by receptor binding; these last two factors can vary by cell type. Cytokines are characterized by considerable redundancy, in that many cytokines appear to share similar functions. It seems to be a paradox that cytokines binding to antibodies have a stronger immune effect than the cytokine alone. This may lead to lower therapeutic doses.

It has been shown that inflammatory cytokines cause an IL-10-dependent inhibition of T-cell expansion and function by up-regulating PD-1 levels on monocytes, which leads to IL-10 production by monocytes after binding of PD-1 by PD-L. Adverse reactions to cytokines are characterized by local inflammation and/or ulceration at the injection sites. Occasionally such reactions are seen with more widespread papular eruptions.

Roles in health and disease

Cytokines are involved in several developmental processes during embryonic development. Cytokines are released from the blastocyst, and are also expressed in the endometrium, and have critical roles in the stages of zona hatching, and implantation. Cytokines are crucial for fighting off infections and in other immune responses. However, they can become dysregulated and pathological in inflammation, trauma, sepsis, and hemorrhagic stroke. Dysregulated cytokine secretion in the aged population can lead to inflammaging, and render these individuals more vulnerable to age-related diseases like neurodegenerative diseases and type 2 diabetes.

A 2019 review was inconclusive as to whether cytokines play any definitive role in ME/CFS.

Adverse effects

Adverse effects of cytokines have been linked to many disease states and conditions ranging from schizophrenia, major depression and Alzheimer's disease to cancer. T regulatory cells (Tregs) and related-cytokines are effectively engaged in the process of tumor immune escape and functionally inhibit immune response against the tumor. Forkhead box protein 3 (Foxp3) as a transcription factor is an essential molecular marker of Treg cells. Foxp3 polymorphism (rs3761548) might be involved in cancer progression like gastric cancer through influencing Tregs function and the secretion of immunomodulatory cytokines such as IL-10, IL-35, and TGF-β. Normal tissue integrity is preserved by feedback interactions between diverse cell types mediated by adhesion molecules and secreted cytokines; disruption of normal feedback mechanisms in cancer threatens tissue integrity.

Over-secretion of cytokines can trigger a dangerous cytokine storm syndrome. Cytokine storms may have been the cause of severe adverse events during a clinical trial of TGN1412. Cytokine storms are also suspected to be the main cause of death in the 1918 "Spanish Flu" pandemic. Deaths were weighted more heavily towards people with healthy immune systems, because of their ability to produce stronger immune responses, with dramatic increases in cytokine levels. Another example of cytokine storm is seen in acute pancreatitis. Cytokines are integral and implicated in all angles of the cascade, resulting in the systemic inflammatory response syndrome and multi-organ failure associated with this intra-abdominal catastrophe. During the COVID-19 pandemic, some deaths have been attributed to cytokine release storms. Current data suggest cytokine storms may be the source of extensive lung tissue damage and dysfunctional coagulation in COVID-19 infections.

Medical use as drugs

Some cytokines have been developed into protein therapeutics using recombinant DNA technology. Recombinant cytokines being used as drugs as of 2014 include:
