
Tuesday, August 23, 2022

International trade

From Wikipedia, the free encyclopedia

International trade is the exchange of capital, goods, and services across international borders or territories because there is a need or want for goods or services.

In most countries, such trade represents a significant share of gross domestic product (GDP). While international trade has existed throughout history (for example Uttarapatha, Silk Road, Amber Road, scramble for Africa, Atlantic slave trade, salt roads), its economic, social, and political importance has been on the rise in recent centuries.

Carrying out trade at an international level is a more complex process than domestic trade. When trade takes place between two or more states, factors such as currency, government policies, economies, judicial systems, laws, and markets influence the exchange.

To ease and justify the process of trade between countries of different economic standing in the modern era, some international economic organizations were formed, such as the World Trade Organization. These organizations work towards the facilitation and growth of international trade. Statistical services of intergovernmental and supranational organizations and governmental statistical agencies publish official statistics on international trade.

Characteristics of global trade

A product that is transferred or sold from a party in one country to a party in another country is an export from the originating country, and an import to the country receiving that product. Imports and exports are accounted for in a country's current account in the balance of payments.

Trading globally may give consumers and countries the opportunity to be exposed to new markets and products. Almost every kind of product can be found in the international market, for example: food, clothes, spare parts, oil, jewellery, wine, stocks, currencies, and water. Services are also traded, such as in tourism, banking, consulting, and transportation.

The ancient Silk Road trade routes across Eurasia.

Advanced technology (including transportation), globalization, industrialization, outsourcing and multinational corporations have major impacts on the international trade system.

Differences from domestic trade

Ports play an important role in facilitating international trade. The Port of New York and New Jersey grew from the original harbor at the convergence of the Hudson River and the East River at the Upper New York Bay.

International trade is, in principle, not different from domestic trade as the motivation and the behavior of parties involved in a trade do not change fundamentally regardless of whether trade is across a border or not.

However, in practical terms, carrying out trade at an international level is typically a more complex process than domestic trade. The main difference is that international trade is typically more costly than domestic trade, because cross-border trade usually incurs additional costs such as explicit tariffs, as well as explicit or implicit non-tariff barriers: time costs (due to border delays), language and cultural differences, product safety requirements, differing legal systems, and so on.

Another difference between domestic and international trade is that factors of production such as capital and labor are often more mobile within a country than across countries. Thus, international trade is mostly restricted to trade in goods and services, and only to a lesser extent to trade in capital, labour, or other factors of production. Trade in goods and services can serve as a substitute for trade in factors of production. Instead of importing a factor of production, a country can import goods that make intensive use of that factor of production and thus embody it. An example of this is the import of labor-intensive goods by the United States from China. Instead of importing Chinese labor, the United States imports goods that were produced with Chinese labor. One report in 2010 suggested that international trade increased when a country hosted a network of immigrants, but that the trade effect weakened when the immigrants assimilated into their new country.

History

The history of international trade chronicles notable events that have affected trading among various economies.

Theories and models

There are several models that seek to explain the factors behind international trade, the welfare consequences of trade and the pattern of trade.

Most traded export products


Largest countries or regions by total international trade

Volume of world merchandise exports
 

The following table is a list of the 21 largest trading states according to the World Trade Organization.

Rank  State  International trade of goods (billions of USD)  International trade of services (billions of USD)  Total international trade of goods and services (billions of USD)
World 32,430 9,635 42,065
 European Union 3,821 1,604 5,425
1  United States 3,706 1,215 4,921
2  China 3,686 656 4,342
3  Germany 2,626 740 3,366
4  United Kingdom 1,066 571 1,637
5  Japan 1,250 350 1,600
6  France 1,074 470 1,544
7  Netherlands 1,073 339 1,412
8  Hong Kong 1,064 172 1,236
9  South Korea 902 201 1,103
10  Italy 866 200 1,066
11  Canada 807 177 984
12  Belgium 763 212 975
13  India 623 294 917
13  Singapore 613 304 917
15  Mexico 771 53 824
16  Spain 596 198 794
17   Switzerland 572 207 779
18  Taiwan 511 93 604
19  Russia 473 122 595
20  Ireland 248 338 586
21  United Arab Emirates 491 92 583

Top traded commodities by value (exports)

Rank  Commodity  Value in US$ ('000)  Date of information
1 Mineral fuels, oils, distillation products, agricultural products (tobacco, wheat, cotton, corn), etc. $2,183,079,941 2015
2 Electrical, electronic equipment $1,833,534,414 2015
3 Machinery, nuclear reactors, boilers, etc. $1,763,371,813 2015
4 Vehicles (excluding railway) $1,076,830,856 2015
5 Plastics and articles thereof $470,226,676 2015
6 Optical, photo, technical, medical, etc. apparatus $465,101,524 2015
7 Pharmaceutical products $443,596,577 2015
8 Iron and steel $379,113,147 2015
9 Organic chemicals $377,462,088 2015
10 Pearls, precious stones, metals, coins, etc. $348,155,369 2015

Source: International Trade Centre

Observances

In the US, presidents have held observances to encourage both large and small companies to become more involved in the export and import of goods and services. President George W. Bush observed World Trade Week on May 18, 2001, and May 17, 2002. On May 13, 2016, President Barack Obama proclaimed May 15 through May 21, 2016, World Trade Week, 2016. On May 19, 2017, President Donald Trump proclaimed May 21 through May 27, 2017, World Trade Week, 2017. World Trade Week is the third week of May, and each year the President proclaims it as such.

International trade versus local production

Local food

In the case of food production, trade-offs between local food and distant food production are controversial, with limited studies comparing environmental impact and scientists cautioning that regionally specific environmental impacts should be considered. The effects of local food on greenhouse gas emissions may vary by the origin and target region of production. According to the 2022 IPCC report on climate change, net carbon emissions from international trade decreased between 2006 and 2016. A 2020 study indicated that, under "current production and consumption patterns" and the locations of food production at the time of the study, local food crop production within a 100 km radius alone could not meet the demand for most food crops for 72–89% of the global population as of early 2020.[18] Studies have found that food miles are a relatively minor factor in carbon emissions, although increased food localization may also enable additional, more significant environmental benefits such as recycling of energy, water, and nutrients. For specific foods, regional differences in harvest seasons may make it more environmentally friendly to import from distant regions than to produce and store locally or to produce locally in greenhouses.

Qualitative differences and economic aspects

Qualitative differences between substitutive products from different production regions may exist due to differing legal requirements and quality standards, or due to differing levels of control by local production and governance systems, which may have aspects of security beyond resource security, environmental protection, product quality, product design and health. The process of transforming supply, as well as labor rights, may differ as well.

Local production has been reported to increase local employment in many cases. A 2018 study claimed that international trade can increase local employment. A 2016 study found that local employment and total labor income in both manufacturing and nonmanufacturing were negatively affected by rising exposure to imports.

Local production in high-income countries, rather than in distant regions, may require higher wages for workers. Higher wages incentivize automation, which could allow the time of workers replaced by automation to be reallocated by society and its economic mechanisms or converted into leisure-like time.

Specialization, production efficiency and regional differences

Local production may require knowledge transfer and technology transfer, and may initially be unable to compete on efficiency with specialized, established industries and businesses, or in consumer demand without policy measures such as eco-tariffs. Regional differences may make specific regions more suitable for a particular kind of production, increasing the advantages of specific trade over specific local production. Highly localized forms of production may not be able to match the efficiency of larger-scale, highly consolidated production, including in terms of environmental impact.

Resource security

A systematic, and possibly first large-scale, cross-sectoral analysis of water, energy and land insecurity in 189 countries that links total and sectoral consumption to sources showed that countries and sectors are highly exposed to over-exploited, insecure, and degraded resources of these kinds, with economic globalization having decreased the security of global supply chains. The 2020 study finds that most countries exhibit greater exposure to resource risks via international trade – mainly from remote production sources – and that diversifying trading partners is unlikely to help countries and sectors reduce these risks or to improve their resource self-sufficiency.

Illicit trade

Illegal gold trade

A number of people in Africa, including children, were using informal or “artisanal” methods to produce gold. While millions were making a livelihood through small-scale mining, the governments of Ghana, Tanzania and Zambia complained about the increase in illegal production and gold smuggling. Sometimes the operations involved criminal networks and carried human and environmental costs. Investigative reports based on Africa’s export data revealed that gold in large quantities is smuggled out through the United Arab Emirates, without any taxes being paid to the producing states. Analysis also revealed discrepancies between the amount of gold exported from Africa and the total imported into the UAE.

In July 2020, a report by Swissaid highlighted that the Dubai-based precious metal refining firms, including Kaloti Jewellery International Group and Trust One Financial Services (T1FS), received most of their gold from poor African states like Sudan. The gold mines in Sudan were seldom under the militias involved in war crimes and human rights abuses. The Swissaid report also highlighted that the illicit gold coming into Dubai from Africa is imported in large quantities by the world’s largest refinery in Switzerland, Valcambi.

Another report, in March 2022, revealed the contradiction between the lucrative gold trade of West African countries and the illicit dealings behind it. As with Sudan, the Democratic Republic of the Congo (DRC), Ghana and other states, discrepancies were recorded between gold production in Mali and its trade with Dubai, UAE. Mali, the third largest gold exporter in Africa, imposed taxes only on the first 50 kg of gold exported per month, which allowed several small-scale miners to enjoy tax exemptions and smuggle gold worth millions. In 2014, Mali’s gold production was 45.8 tonnes, while the UAE’s gold imports from Mali stood at 59.9 tonnes.

History of animal testing

From Wikipedia, the free encyclopedia
 
One of Pavlov’s dogs with a saliva-catch container and tube surgically implanted in its muzzle, Pavlov Museum, 2005

The history of animal testing goes back to the writings of the Ancient Greeks in the 4th and 3rd centuries BCE, with Aristotle (384–322 BCE) and Erasistratus (304–258 BCE) among the first documented to perform experiments on nonhuman animals. Galen, a physician in 2nd-century Rome, dissected pigs and goats, and is known as the "Father of Vivisection." Avenzoar, an Arabic physician in 12th-century Moorish Spain who also practiced dissection, introduced animal testing as an experimental method of testing surgical procedures before applying them to human patients. Although the exact purpose of the procedure is unclear, a Neolithic surgeon performed trepanation on a cow between 3400 and 3000 BCE. This is the earliest known surgery to have been performed on an animal, and it is possible that the procedure was done on a dead cow so that the surgeon could practice their skills.

History of animal testing

The mouse is a typical testing species.

In 1242, Ibn al-Nafis provided accurate descriptions of the circulation of blood in mammals. A complete description of this circulation was later provided in the 17th century by William Harvey.

In his unfinished 1627 utopian novel, New Atlantis, scientist and philosopher Francis Bacon proposed a research center containing "parks and enclosures of all sorts of beasts and birds which we use ... for dissections and trials; that thereby we may take light what may be wrought upon the body of man."

In the 1660s, the physicist Robert Boyle conducted many experiments with a pump to investigate the effects of rarefied air. He listed two experiments on living nonhuman animals: "Experiment 40", which tested the ability of insects to fly under reduced air pressure, and the dramatic "Experiment 41," which demonstrated the reliance of living creatures on the air for their survival. Boyle conducted numerous trials during which he placed a large variety of different nonhuman animals, including birds, mice, eels, snails and flies, in the vessel of the pump and studied their reactions as the air was removed. Here, he describes an injured lark:

…the Bird for a while appear'd lively enough; but upon a greater Exsuction of the Air, she began manifestly to droop and appear sick, and very soon after was taken with as violent and irregular Convulsions, as are wont to be observ'd in Poultry, when their heads are wrung off: For the Bird threw her self over and over two or three times, and dyed with her Breast upward, her Head downwards, and her Neck awry.

In the 18th century, Antoine Lavoisier decided to use a guinea pig in a calorimeter because he wanted to prove that respiration was a form of combustion. He suspected that combustion and respiration were chemically identical. Lavoisier demonstrated this with the help of Pierre-Simon Laplace. They carefully measured the amount of "carbon dioxide and heat given off by a guinea pig as (they) breathed". Then they contrasted this with "the amount of heat produced when they burned carbon to produce the same amount of carbon dioxide as had been exhaled by the guinea pig". Their conclusion made Lavoisier confident "that respiration is a form of combustion". The result also showed that the heat mammals produce through respiration allows their bodies to stay above room temperature.

Stephen Hales measured blood pressure in the horse. In the 1780s, Luigi Galvani demonstrated that electricity applied to a dead, dissected, frog's leg muscle caused it to twitch, which led to an appreciation for the relationship between electricity and animation. In the 1880s, Louis Pasteur convincingly demonstrated the germ theory of medicine by giving anthrax to sheep. In the 1890s, Ivan Pavlov famously used dogs to describe classical conditioning.

In 1921 Otto Loewi provided the first substantial evidence that neuronal communication with target cells occurred via chemical synapses. He extracted two hearts from frogs and left them beating in an ionic bath. He stimulated the attached Vagus nerve of the first heart and observed its beating slowed. When the second heart was placed in the ionic bath of the first, it also slowed.

In the 1920s, Edgar Adrian formulated the theory of neural communication that the frequency of action potentials, and not the size of the action potentials, was the basis for communicating the magnitude of the signal. His work was performed in an isolated frog nerve-muscle preparation. Adrian was awarded a Nobel Prize for his work.

In the 1960s David Hubel and Torsten Wiesel demonstrated the macro columnar organization of visual areas in cats and monkeys, and provided physiological evidence for the critical period for the development of disparity sensitivity in vision (i.e.: the main cue for depth perception), and were awarded a Nobel Prize for their work.

In 1996 Dolly the sheep was born, the first mammal to be cloned from an adult cell. The process by which Dolly the sheep was cloned utilized a process known as nuclear transfer applied by lead scientist Ian Wilmut. Although other scientists were not immediately able to replicate the experiment, Wilmut argued that the experiment was indeed repeatable, given a timeframe of over a year.

In 1997, innovations in frogs, Xenopus laevis, by developmental biologist Jonathan Slack of the University of Bath, created headless tadpoles, which could allow future applications in donor organ transplantation.

There has been growing concern about both the methodology and the care of animals in laboratories who are used in testing. There is increasing emphasis on more humane and compassionate treatment of other animals. Methodological concerns include factors that make animal study results less reproducible than intended. For example, a 2014 study from McGill University in Montreal, Canada suggests that mice handled by men rather than women showed higher stress levels.

In medicine

Early depictions of vivisection using pigs

In the 1880s and 1890s, Emil von Behring isolated the diphtheria toxin and demonstrated its effects in guinea pigs. He went on to demonstrate immunity against diphtheria in other animals in 1898 by injecting a mix of toxin and antitoxin. This work constituted in part the rationale for awarding von Behring the 1901 Nobel Prize in Physiology or Medicine. Roughly 15 years later, Behring announced such a mix suitable for human immunity, which largely banished diphtheria from the scourges of humankind. The antitoxin is famously commemorated each year in the Iditarod race, which is modeled after the 1925 serum run to Nome. The success of the animal studies in producing the diphtheria antitoxin is attributed by some as a cause of the decline of the early 20th century antivivisectionist movement in the USA.

In 1921, Frederick Banting tied up the pancreatic ducts of dogs and discovered that the isolates of pancreatic secretion could be used to keep dogs with diabetes alive. He followed up these experiments with the chemical isolation of insulin in 1922 with John Macleod. These experiments used bovine sources instead of dogs to improve the supply. The first person treated was Leonard Thompson, a 14-year-old diabetic who only weighed 65 pounds and was about to slip into a coma and die. After the first dose, the formulation had to be re-worked, a process that took 12 days. The second dose was effective. These two won the Nobel Prize in Physiology or Medicine in 1923 for their discovery of insulin and its treatment of diabetes mellitus. Thompson lived 13 more years taking insulin. Before insulin's clinical use, a diagnosis of diabetes mellitus meant death; Thompson had been diagnosed in 1919.

In 1943, Selman Waksman's laboratory discovered streptomycin using a series of screens to find antibacterial substances from the soil. Waksman coined the term antibiotic with regards to these substances. Waksman would win the Nobel Prize in Physiology or Medicine in 1952 for his discoveries in antibiotics. Corwin Hinshaw and William Feldman took the streptomycin samples and cured tuberculosis in four guinea pigs with it. Hinshaw followed these studies with human trials that provided a dramatic advance in the ability to stop and reverse the progression of tuberculosis. Mortality from tuberculosis in the UK has diminished from the early 20th century due to better hygiene and improved living standards, but from the moment antibiotics were introduced, the fall became steep so that by the 1980s mortality in developed countries was effectively zero.

In the 1940s, Jonas Salk used rhesus monkey cross-contamination studies to isolate the three forms of the polio virus that affected hundreds of thousands yearly. Salk's team created a vaccine against the strains of polio in cell cultures of rhesus monkey kidney cells. The vaccine was made publicly available in 1955 and reduced the incidence of polio 15-fold in the USA over the following five years. Albert Sabin made a superior "live" vaccine by passing the polio virus through animal hosts, including monkeys. The vaccine was produced for mass consumption in 1963 and is still in use today. It had virtually eradicated polio in the US by 1965. It has been estimated that 100,000 rhesus monkeys were killed in the course of developing the polio vaccines, and 65 doses of vaccine were produced from each monkey. Writing in the Winston-Salem Journal in 1992, Sabin said, "Without the use of nonhuman animals and human (animals), it would have been impossible to acquire the important knowledge needed to prevent much suffering and premature death not only among humans but (other) animals as well."

Also in the 1940s, John Cade tested lithium salts in guinea pigs in a search for pharmaceuticals with anticonvulsant properties. The nonhuman animals seemed calmer in their mood. He then tested lithium on himself, before using it to treat recurrent mania. The introduction of lithium revolutionized the treatment of manic-depressives by the 1970s. Prior to Cade's animal testing, manic-depressives were treated with a lobotomy or electro-convulsive therapy.

In the 1950s the first safer, volatile anaesthetic halothane was developed through studies on rodents, rabbits, dogs, cats and monkeys. This paved the way for a whole new generation of modern general anaesthetics – also developed by animal studies – without which modern, complex surgical operations would be virtually impossible.

In 1960, Albert Starr pioneered heart valve replacement surgery in humans after a series of surgical advances in dogs. He received the Lasker Medical Award in 2007 for his efforts, along with Alain Carpentier. In 1968 Carpentier made heart valve replacements from the heart valves of pigs, which are pre-treated with glutaraldehyde to blunt immune response. Over 300,000 people receive heart valve replacements derived from Starr and Carpentier's designs annually. Carpentier said of Starr's initial advances, "Before his prosthetic, patients with valvular disease would die."

In the 1970s, leprosy multi-drug antibiotic treatments were refined using leprosy bacteria grown in armadillos and were then tested in human clinical trials. Today, the nine-banded armadillo is still used to culture the bacteria that causes leprosy, for studies of the proteomics and genomics (the genome was completed in 1998) of the bacteria, for improving therapy and developing vaccines. Leprosy is still prevalent in Brazil, Madagascar, Mozambique, Tanzania, India, and Nepal, with over 400,000 cases at the beginning of 2004. The bacteria has not yet been cultured in vitro with success necessary to develop drug treatments or vaccines, and mice and armadillos have been the sources of the bacteria for research.

The non-human primate models of AIDS, using HIV-2, SHIV, and SIV in macaques, have been used as a complement to ongoing research efforts against the virus. The drug tenofovir has had its efficacy and toxicology evaluated in macaques and found long-term/high-dose treatments had adverse effects not found using short-term/high-dose treatment followed by long-term/low-dose treatment. This finding in macaques was translated into human dosing regimens. Prophylactic treatment with anti-virals has been evaluated in macaques because an introduction of the virus can only be controlled in an animal model. The finding that prophylaxis can be effective at blocking infection has altered the treatment for occupational exposures, such as needle exposures. Such exposures are now followed rapidly with anti-HIV drugs, and this practice has resulted in measurable transient virus infection similar to the NHP model. Similarly, the mother-to-fetus transmission, and its fetal prophylaxis with antivirals such as tenofovir and AZT, has been evaluated in controlled testing in macaques not possible in humans, and this knowledge has guided antiviral treatment in pregnant mothers with HIV. "The comparison and correlation of results obtained in monkey and human studies are leading to a growing validation and recognition of the relevance of the animal model. Although each animal model has its limitations, carefully designed drug studies in nonhuman primates can continue to advance our scientific knowledge and guide future clinical trials."

Throughout the 20th century, research that used live nonhuman animals has led to many other medical advances and treatments for human diseases, such as: organ transplant techniques and anti-transplant rejection medications, the heart-lung machine, antibiotics like penicillin, and whooping cough vaccine.

Presently, animal experimentation continues to be used in research that aims to solve medical problems including Alzheimer's disease, multiple sclerosis, spinal cord injury, and many more conditions in which there is no useful in vitro model system available.

Veterinary advances

A veterinary surgeon at work with a cat

Animal testing for veterinary studies accounts for around five percent of research using other animals. Treatments to each of the following animal diseases have been derived from animal studies: rabies, anthrax, glanders, Feline immunodeficiency virus (FIV), tuberculosis, Texas cattle fever, Classical swine fever (hog cholera), Heartworm and other parasitic infections.

Testing other animals for rabies requires the animal to be dead, and the test takes two hours to conduct.

Basic and applied research in veterinary medicine continues in varied topics, such as searching for improved treatments and vaccines for feline leukemia virus and improving veterinary oncology.

Early debate

The ethical implications of using animals for testing have long been the subject of heated debate, particularly with regard to humane treatment.
 

In 1655, physiologist Edmund O'Meara was recorded as saying that "the miserable torture of vivisection places the body in an unnatural state." O'Meara thus expressed one of the chief scientific objections to vivisection: that the pain that the individual endured would interfere with the accuracy of the results.

In 1822, the first animal protection law was enacted in the British parliament, followed by the Cruelty to Animals Act (1876), the first law specifically aimed at regulating animal testing. The legislation was promoted by Charles Darwin, who wrote to Ray Lankester in March 1871:

You ask about my opinion on vivisection. I quite agree that it is justifiable for real investigations on physiology; but not for mere damnable and detestable curiosity. It is a subject which makes me sick with horror, so I will not say another word about it, else I shall not sleep to-night.

Opposition to the use of nonhuman animals in medical research arose in the United States during the 1860s, when Henry Bergh founded the American Society for the Prevention of Cruelty to Animals (ASPCA), with America's first specifically anti-vivisection organization being the American AntiVivisection Society (AAVS), founded in 1883.

In the UK, an article in the Medical Times and Gazette on April 28, 1877, indicates that anti-vivisectionist campaigners, mainly clergymen, had prepared a number of posters entitled "This is vivisection," "This is a living dog," and "This is a living rabbit," depicting nonhuman animals in poses that they said copied the work of Elias von Cyon in St. Petersburg, though the article says the images differ from the originals. It states that no more than 10 or a dozen men were actively involved in animal testing on living nonhuman animals in the UK at that time.

Antivivisectionists of the era believed the spread of mercy was the great cause of civilization, and vivisection was cruel. However, in the U.S., the antivivisectionists' efforts were defeated in every legislature because of the widespread support of an informed public for the careful and judicious use of other animals. The early antivivisectionist movement in the U.S. dwindled greatly in the 1920s. Overall, this movement had no US legislative success. The Laboratory Animal Welfare Act, passed in 1966, was focused more on protecting the welfare of animals used in all fields, including research, food production, and consumer product development.

On the other side of the debate, those in favor of nonhuman-animal testing held that experiments on other animals were necessary to advance medical and biological knowledge and to ensure the safety of products intended for human and animal use. In 1831, the founders of the Dublin Zoo—the fourth oldest zoo in Europe, after Vienna, Paris, and London—were members of the medical profession, interested in studying the individuals both while they were alive and when they were dead. Claude Bernard, known as the "prince of vivisectors" and the father of physiology—whose wife, Marie Françoise Martin, founded the first anti-vivisection society in France in 1883—famously wrote in 1865 that "the science of life is a superb and dazzlingly lighted hall which may be reached only by passing through a long and ghastly kitchen." Arguing that "experiments on (nonhuman) animals...are entirely conclusive for the toxicology and hygiene of man...the effects of these substances are the same on man as on (other) animals, save for differences in degree," Bernard established animal experimentation as part of the standard scientific method. In 1896, the physiologist and physician Dr. Walter B. Cannon said "The antivivisectionists are the second of the two types Theodore Roosevelt described when he said, 'Common sense without conscience may lead to crime, but conscience without common sense may lead to folly, which is the handmaiden of crime.'" These divisions between pro- and anti- animal testing groups first came to public attention during the brown dog affair in the early 20th century, when hundreds of medical students clashed with anti-vivisectionists and police over a memorial to a vivisected dog.

Variable renewable energy

From Wikipedia, the free encyclopedia
 
The 150 MW Andasol solar power station is a commercial parabolic trough solar thermal power plant, in Spain. The Andasol plant uses tanks of molten salt to store solar energy so that it can continue generating electricity even after sunset.
 
Grids with high penetration of renewable energy sources generally need more flexible generation rather than baseload generation

Variable renewable energy (VRE) or intermittent renewable energy sources (IRES) are renewable energy sources that are not dispatchable due to their fluctuating nature, such as wind power and solar power, as opposed to controllable renewable energy sources, such as dammed hydroelectricity or biomass, or relatively constant sources, such as geothermal power.

The use of small amounts of intermittent power has little effect on grid operations. Using larger amounts of intermittent power may require upgrades or even a redesign of the grid infrastructure. Options to absorb large shares of variable energy into the grid include using storage, improved interconnection between different variable sources to smooth out supply, using dispatchable energy sources such as hydroelectricity and having overcapacity, so that sufficient energy is produced even when weather is less favourable. More connections between the energy sector and the building, transport and industrial sectors may also help.

Background and terminology

The penetration of intermittent renewables in most power grids is low: global electricity generation in 2021 was 7% wind and 4% solar. However, in 2021 Denmark, Luxembourg and Uruguay generated over 40% of their electricity from wind and solar. Characteristics of variable renewables include their unpredictability, variability, their low running costs and the fact they are constrained to a certain location. These provide a challenge to grid operators, who must make sure supply and demand are matched. Solutions include energy storage, demand response, availability of overcapacity and sector coupling. Smaller isolated grids may be less tolerant to high levels of penetration.

Matching power demand to supply is not a problem specific to intermittent power sources. Existing power grids already contain elements of uncertainty including sudden and large changes in demand and unforeseen power plant failures. Though power grids are already designed to have some capacity in excess of projected peak demand to deal with these problems, significant upgrades may be required to accommodate large amounts of intermittent power.

Several key terms are useful for understanding the issue of intermittent power sources. These terms are not standardized, and variations may be used. Most of these terms also apply to traditional power plants.

  • Intermittency or variability is the extent to which a power source fluctuates. This has two aspects: a predictable variability (such as the day-night cycle) and an unpredictable part (imperfect local weather forecasting). The term intermittent can be used to refer to the unpredictable part, with variable then referring to the predictable part.
  • Dispatchability is the ability of a given power source to increase and decrease output quickly on demand. The concept is distinct from intermittency; dispatchability is one of several ways system operators match supply (generator's output) to system demand (technical loads).
  • Penetration is the amount of electricity generated as a percentage of annual consumption.
  • Nominal power or nameplate capacity is the maximum output of a generating plant in normal operating conditions. This is the most common number used and is typically expressed in watts (including multiples like kW, MW, GW).
  • Capacity factor, average capacity factor, or load factor is the average expected output of a generator, usually over an annual period. It is expressed as a percentage of the nameplate capacity or in decimal form (e.g. 30% or 0.30); a brief worked sketch follows this list.
  • Firm capacity or firm power is "guaranteed by the supplier to be available at all times during a period covered by a commitment".
  • Capacity credit: the amount of conventional (dispatchable) generation power that can be potentially removed from the system while keeping the reliability, usually expressed as a percentage of the nominal power.
  • Foreseeability or predictability is how accurately the operator can anticipate the generation: for example tidal power varies with the tides but is completely foreseeable because the orbit of the moon can be predicted exactly, and improved weather forecasts can make wind power more predictable.
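
As a rough illustration of how these terms relate, the following is a minimal Python sketch (not from the original article; the plant size, capacity factor and demand figure are illustrative assumptions) computing annual output from nameplate capacity and capacity factor, and penetration by energy.

```python
# Illustrative sketch of the terminology above; all input figures are assumed.
HOURS_PER_YEAR = 8760

def annual_energy_mwh(nameplate_mw: float, capacity_factor: float) -> float:
    """Average expected annual output implied by nameplate capacity and capacity factor."""
    return nameplate_mw * capacity_factor * HOURS_PER_YEAR

def penetration_by_energy(generated_mwh: float, system_demand_mwh: float) -> float:
    """Penetration as energy generated divided by system demand over the same period."""
    return generated_mwh / system_demand_mwh

# Hypothetical 100 MW wind farm with a 35% capacity factor on a grid consuming 2 TWh/year.
wind_energy = annual_energy_mwh(100, 0.35)   # ~306,600 MWh per year
system_demand = 2_000_000                    # MWh per year, assumed
print(f"Annual output: {wind_energy:,.0f} MWh")
print(f"Penetration:   {penetration_by_energy(wind_energy, system_demand):.1%}")
```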

Sources

Dammed hydroelectricity, biomass and geothermal are dispatchable, as each has a store of potential energy; wind and solar without storage can be decreased, but not dispatched, other than when nature provides. Between wind and solar, solar has a more variable daily cycle than wind, but is more predictable in daylight hours than wind. Like solar, tidal energy varies between on and off cycles through each day; unlike solar, however, there is no intermittency, as the tides are available every day without fail.

Wind power

Day ahead prediction and actual wind power

Grid operators use day ahead forecasting to determine which of the available power sources to use the next day, and weather forecasting is used to predict the likely wind power and solar power output available. Although wind power forecasts have been used operationally for decades, as of 2019 the IEA is organizing international collaboration to further improve their accuracy.

Erie Shores Wind Farm monthly output over a two-year period
 

Wind-generated power is a variable resource, and the amount of electricity produced at any given point in time by a given plant will depend on wind speeds, air density, and turbine characteristics (among other factors). If wind speed is too low then the wind turbines will not be able to make electricity, and if it is too high the turbines will have to be shut down to avoid damage. While the output from a single turbine can vary greatly and rapidly as local wind speeds vary, as more turbines are connected over larger and larger areas the average power output becomes less variable.

  • Intermittence: Regions smaller than synoptic scale (less than about 1000 km long, the size of an average country) have mostly the same weather and thus around the same wind power, unless local conditions favor special winds. Some studies show that wind farms spread over a geographically diverse area will as a whole rarely stop producing power altogether. However this is rarely the case for smaller areas with uniform geography such as Ireland, Scotland and Denmark which have several days per year with little wind power.
  • Capacity factor: Wind power typically has an annual capacity factor of 25–50%, with offshore wind outperforming onshore wind.
  • Dispatchability: Because wind power is not by itself dispatchable wind farms are sometimes built with storage.
  • Capacity credit: At low levels of penetration, the capacity credit of wind is about the same as the capacity factor. As the concentration of wind power on the grid rises, the capacity credit percentage drops.
  • Variability: Site dependent. Sea breezes are much more constant than land breezes. Seasonal variability may reduce output by 50%.
  • Reliability: A wind farm has high technical reliability when the wind blows. That is, the output at any given time will only vary gradually due to falling wind speeds or storms (the latter necessitating shut downs). A typical wind farm is unlikely to have to shut down in less than half an hour at the extreme, whereas an equivalent-sized power station can fail totally instantaneously and without warning. The total shutdown of wind turbines is predictable via weather forecasting. The average availability of a wind turbine is 98%, and when a turbine fails or is shut down for maintenance it only affects a small percentage of the output of a large wind farm.
  • Predictability: Although wind is variable, it is also predictable in the short term. There is an 80% chance that wind output will change less than 10% in an hour and a 40% chance that it will change 10% or more in 5 hours.

Because wind power is generated by large numbers of small generators, individual failures do not have large impacts on power grids. This feature of wind has been referred to as resiliency.

Solar power

Daily solar output at AT&T Park in San Francisco
 
Seasonal variation of the output of the solar panels at AT&T park in San Francisco

Intermittency inherently affects solar energy, as the production of renewable electricity from solar sources depends on the amount of sunlight at a given place and time. Solar output varies throughout the day and through the seasons, and is affected by dust, fog, cloud cover, frost or snow. Many of the seasonal factors are fairly predictable, and some solar thermal systems make use of heat storage to produce grid power for a full day.

  • Variability: In the absence of an energy storage system, solar does not produce power at night, little in bad weather and varies between seasons. In many countries, solar produces most energy in seasons with low wind availability and vice versa.
  • Capacity factor: Standard photovoltaic solar has an annual average capacity factor of 10–20%, while panels that move to track the sun reach capacity factors of up to 30%. Thermal solar parabolic trough with storage reaches about 56%, and thermal solar power tower with storage about 73% (the sketch after this list compares the resulting annual yields).
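
To see how these capacity factors translate into energy delivered, here is a small illustrative sketch (the 100 MW plant size is an assumption; the capacity factors are the figures quoted above):

```python
# Annual energy yield of a hypothetical 100 MW plant at the capacity factors quoted above.
HOURS_PER_YEAR = 8760
NAMEPLATE_MW = 100  # arbitrary illustrative plant size

capacity_factors = {
    "fixed photovoltaic": 0.15,               # midpoint of the 10-20% range
    "tracking photovoltaic": 0.30,
    "parabolic trough with storage": 0.56,
    "power tower with storage": 0.73,
}

for technology, cf in capacity_factors.items():
    energy_gwh = NAMEPLATE_MW * cf * HOURS_PER_YEAR / 1000
    print(f"{technology}: {energy_gwh:,.0f} GWh/year")
```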

The impact of intermittency of solar-generated electricity will depend on the correlation of generation with demand. For example, solar thermal power plants such as Nevada Solar One are somewhat matched to summer peak loads in areas with significant cooling demands, such as the south-western United States. Thermal energy storage systems like the small Spanish Gemasolar Thermosolar Plant can improve the match between solar supply and local consumption. The improved capacity factor using thermal storage represents a decrease in maximum capacity, and extends the total time the system generates power.

Run-of-the-river hydroelectricity

In many countries new large dams are no longer being built, because of the environmental impact of reservoirs. Run of the river projects have continued to be built. The absence of a reservoir results in both seasonal and annual variations in electricity generated.

Tidal power

Types of tide

Tidal power is the most predictable of all the variable renewable energy sources. The tides reverse twice a day, but they are never intermittent; on the contrary, they are completely reliable. Only 20 sites in the world have so far been identified as possible tidal power stations.

Wave power

Waves are primarily created by wind, so the power available from waves tends to follow that available from wind but, due to the mass of the water, is less variable than wind power. Wind power is proportional to the cube of the wind speed, while wave power is proportional to the square of the wave height.
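
A tiny sketch of those scaling relations (proportionality constants omitted; purely illustrative): doubling the wind speed multiplies available wind power by eight, while doubling the wave height multiplies available wave power by four.

```python
# Scaling relations noted above: wind power ~ v**3, wave power ~ H**2 (constants omitted).
def wind_power_scaling(v: float) -> float:
    return v ** 3

def wave_power_scaling(h: float) -> float:
    return h ** 2

print(wind_power_scaling(2.0) / wind_power_scaling(1.0))  # 8.0: doubling wind speed -> 8x power
print(wave_power_scaling(2.0) / wave_power_scaling(1.0))  # 4.0: doubling wave height -> 4x power
```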

Solutions for their integration

The displaced dispatchable generation could be coal, natural gas, biomass, nuclear, geothermal or storage hydro. Rather than starting and stopping nuclear or geothermal it is cheaper to use them as constant base load power. Any power generated in excess of demand can displace heating fuels, be converted to storage or sold to another grid. Biofuels and conventional hydro can be saved for later when intermittents are not generating power. Alternatives to burning coal and natural gas which produce fewer greenhouse gases may eventually make fossil fuels a stranded asset that is left in the ground. Highly integrated grids favor flexibility and performance over cost, resulting in more plants that operate for fewer hours and lower capacity factors.

All sources of electrical power have some degree of variability, as do demand patterns, which routinely drive large swings in the amount of electricity that suppliers feed into the grid. Wherever possible, grid operating procedures are designed to match supply with demand at high levels of reliability, and the tools to influence supply and demand are well-developed. The introduction of large amounts of highly variable power generation may require changes to existing procedures and additional investments.

A reliable renewable power supply can be achieved through the use of backup or extra infrastructure and technology, using mixed renewables to produce electricity above the intermittent average, which may be used to meet regular and unanticipated supply demands. Additionally, the storage of energy to cover shortfalls from intermittency or for emergencies can be part of a reliable power supply.

In practice, as the power output from wind varies, partially loaded conventional plants, which are already present to provide response and reserve, adjust their output to compensate. While low penetrations of intermittent power may use existing levels of response and spinning reserve, the larger overall variations at higher penetration levels will require additional reserves or other means of compensation.

Operational reserve

All managed grids already have existing operational and "spinning" reserve to compensate for existing uncertainties in the power grid. The addition of intermittent resources such as wind does not require 100% "back-up" because operating reserves and balancing requirements are calculated on a system-wide basis, and not dedicated to a specific generating plant.

Some gas, or hydro power plants are partially loaded and then controlled to change as demand changes or to replace rapidly lost generation. The ability to change as demand changes is termed "response". The ability to quickly replace lost generation, typically within timescales of 30 seconds to 30 minutes, is termed "spinning reserve".

Generally thermal plants running as peaking plants will be less efficient than if they were running as base load. Hydroelectric facilities with storage capacity (such as the traditional dam configuration) may be operated as base load or peaking plants.

Grids can contract for grid battery plants, which provide immediately available power for an hour or so, which gives time for other generators to be started up in the event of a failure, and greatly reduces the amount of spinning reserve required.

Demand response

Demand response is a change in consumption of energy to better align with supply. It can take the form of switching off loads or absorbing additional energy to correct supply/demand imbalances. Incentives have been widely created in the American, British and French systems for the use of these approaches, such as favorable rates or capital cost assistance, encouraging consumers with large loads to take them offline whenever there is a shortage of capacity, or conversely to increase load when there is a surplus.

Certain types of load control allow the power company to turn loads off remotely if insufficient power is available. In France, large users such as CERN cut power usage as required by the system operator, EDF, under the encouragement of the EJP tariff.

Energy demand management refers to incentives to adjust use of electricity, such as higher rates during peak hours. Real-time variable electricity pricing can encourage users to adjust usage to take advantage of periods when power is cheaply available and avoid periods when it is more scarce and expensive. Some loads such as desalination plants, electric boilers and industrial refrigeration units, are able to store their output (water and heat). Several papers also concluded that Bitcoin mining loads would reduce curtailment, hedge electricity price risk, stabilize the grid, increase the profitability of renewable energy power stations and therefore accelerate transition to sustainable energy. But others argue that Bitcoin mining can never be sustainable.

Instantaneous demand reduction. Most large systems also have a category of loads which instantly disconnect when there is a generation shortage, under some mutually beneficial contract. This can give instant load reductions (or increases).

Storage

Construction of the Salt Tanks which provide efficient thermal energy storage so that output can be provided after the sun goes down, and output can be scheduled to meet demand requirements. The 280 MW Solana Generating Station is designed to provide six hours of energy storage. This allows the plant to generate about 38 percent of its rated capacity over the course of a year.
 
Learning curve of lithium-ion batteries: the price of batteries declined by 97% in three decades.

At times of low load where non-dispatchable output from wind and solar may be high, grid stability requires lowering the output of various dispatchable generating sources or even increasing controllable loads, possibly by using energy storage to time-shift output to times of higher demand. Such mechanisms can include:

Pumped storage hydropower is the most prevalent existing technology used, and can substantially improve the economics of wind power. The availability of hydropower sites suitable for storage will vary from grid to grid. Typical round trip efficiency is 80%.
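
As a minimal worked example of what an 80% round-trip efficiency means in practice (the 500 MWh surplus figure is an assumption chosen for illustration):

```python
# Energy recovered from pumped storage at the ~80% round-trip efficiency quoted above.
def recovered_energy_mwh(stored_mwh: float, round_trip_efficiency: float = 0.80) -> float:
    """Energy delivered back to the grid after one full charge/discharge cycle."""
    return stored_mwh * round_trip_efficiency

surplus = 500.0  # MWh of surplus wind/solar output sent to storage (assumed figure)
recovered = recovered_energy_mwh(surplus)
print(f"Recovered: {recovered:.0f} MWh, lost in the cycle: {surplus - recovered:.0f} MWh")
```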

Traditional lithium-ion is the most common type used for grid-scale battery storage as of 2020. Rechargeable flow batteries can serve as a large capacity, rapid-response storage medium. Hydrogen can be created through electrolysis and stored for later use.

Flywheel energy storage systems have some advantages over chemical batteries. Along with substantial durability, which allows them to be cycled frequently without noticeable life reduction, they also have very fast response and ramp rates. They can go from full discharge to full charge within a few seconds. They can be manufactured from non-toxic and environmentally friendly materials and are easily recyclable once their service life is over.

Thermal energy storage stores heat. Stored heat can be used directly for heating needs or converted into electricity. In the context of a CHP plant, a heat store can serve as a functional electricity store at comparably low cost. With ice storage air conditioning, ice can be stored interseasonally and used as a source of air conditioning during periods of high demand. Present systems only need to store ice for a few hours but are well developed.

Storage of electrical energy results in some lost energy because storage and retrieval are not perfectly efficient. Storage also requires capital investment and space for storage facilities.

Geographic diversity and complementing technologies

Five days of hourly output of five wind farms in Ontario

The variability of production from a single wind turbine can be high. Combining any additional number of turbines (for example, in a wind farm) results in lower statistical variation, as long as the correlation between the output of each turbine is imperfect, and the correlations are always imperfect due to the distance between each turbine. Similarly, geographically distant wind turbines or wind farms have lower correlations, reducing overall variability. Since wind power is dependent on weather systems, there is a limit to the benefit of this geographic diversity for any power system.
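
The statistical smoothing described above can be sketched numerically. The following illustrative Python/NumPy simulation (the number of farms, the pairwise correlation and the simulated hours are all assumptions) shows that the per-farm variability of the combined output of several imperfectly correlated farms is lower than that of a single farm.

```python
import numpy as np

rng = np.random.default_rng(0)

n_farms = 5
n_hours = 10_000
correlation = 0.4  # assumed pairwise correlation between farm outputs

# Covariance matrix with unit variance per farm and the assumed pairwise correlation.
cov = np.full((n_farms, n_farms), correlation)
np.fill_diagonal(cov, 1.0)

# Simulated hourly deviations of each farm's output around its mean (arbitrary units).
outputs = rng.multivariate_normal(mean=np.zeros(n_farms), cov=cov, size=n_hours)

single_farm_std = outputs[:, 0].std()
aggregate_std_per_farm = outputs.sum(axis=1).std() / n_farms

print(f"Std of one farm:               {single_farm_std:.2f}")
print(f"Per-farm std of the aggregate: {aggregate_std_per_farm:.2f}")  # lower, but not zero
```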

Multiple wind farms spread over a wide geographic area and gridded together produce power more constantly and with less variability than smaller installations. Wind output can be predicted with some degree of confidence using weather forecasts, especially from large numbers of turbines/farms. The ability to predict wind output is expected to increase over time as data is collected, especially from newer facilities.

Electricity produced from solar energy tends to counterbalance the fluctuating supplies generated from wind. Normally it is windiest at night and during cloudy or stormy weather, and there is more sunshine on clear days with less wind. In addition, wind energy often peaks in the winter season, whereas solar energy peaks in the summer season; the combination of wind and solar reduces the need for dispatchable backup power.

  • In some locations, electricity demand may have a high correlation with wind output, particularly in locations where cold temperatures drive electric consumption (as cold air is denser and carries more energy).
  • The allowable penetration may be increased with further investment in standby generation. For instance, some days could produce 80% intermittent wind, while on the many windless days 80% could instead come from dispatchable power such as natural gas, biomass and hydro.
  • Areas with existing high levels of hydroelectric generation may ramp up or down to incorporate substantial amounts of wind. Norway, Brazil, and Manitoba all have high levels of hydroelectric generation, Quebec produces over 90% of its electricity from hydropower, and Hydro-Québec is the largest hydropower producer in the world. The U.S. Pacific Northwest has been identified as another region where wind energy is complemented well by existing hydropower. Storage capacity in hydropower facilities will be limited by size of reservoir, and environmental and other considerations.

Connecting grid internationally

It is often feasible to export energy to neighboring grids at times of surplus, and import energy when needed. This practice is common in Europe and between the US and Canada. Integration with other grids can lower the effective concentration of variable power: for instance, Denmark's high penetration of VRE, in the context of the German/Dutch/Scandinavian grids with which it has interconnections, is considerably lower as a proportion of the total system. Hydroelectricity that compensates for variability can be used across countries.

The capacity of power transmission infrastructure may have to be substantially upgraded to support export/import plans. Some energy is lost in transmission. The economic value of exporting variable power depends in part on the ability of the exporting grid to provide the importing grid with useful power at useful times for an attractive price.

Sector coupling

Demand and generation can be better matched when sectors such as mobility, heat and gas are coupled with the power system. The electric vehicle market is for instance expected to become the largest source of storage capacity. This may be a more expensive option appropriate for high penetration of variable renewables, compared to other sources of flexibility. The International Energy Agency says that sector coupling is needed to compensate for the mismatch between seasonal demand and supply.

Electric vehicles can be charged during periods of low demand and high production, and in some places can send power back to the grid (vehicle-to-grid).

Penetration

Penetration refers to the proportion of a primary energy (PE) source in an electric power system, expressed as a percentage. There are several methods of calculation, yielding different penetrations. The penetration can be calculated as any of the following (a worked sketch follows the list):

  1. the nominal capacity (installed power) of a PE source divided by the peak load within an electric power system; or
  2. the nominal capacity (installed power) of a PE source divided by the total capacity of the electric power system; or
  3. the electrical energy generated by a PE source in a given period, divided by the demand of the electric power system in this period.
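
A minimal sketch (all figures are assumptions chosen for illustration) showing how the three methods can give quite different numbers for the same system:

```python
# Three ways of computing penetration, following the list above; all inputs are assumed.
installed_wind_mw = 4_000           # nominal (installed) capacity of the PE source
peak_load_mw = 10_000               # peak load of the electric power system
total_system_capacity_mw = 15_000   # total installed capacity of the system
wind_energy_gwh = 9_000             # energy generated by the PE source in the period
system_demand_gwh = 60_000          # demand of the system over the same period

by_peak_load = installed_wind_mw / peak_load_mw                    # method 1: 40%
by_total_capacity = installed_wind_mw / total_system_capacity_mw   # method 2: ~27%
by_energy = wind_energy_gwh / system_demand_gwh                    # method 3: 15%

print(f"By peak load:      {by_peak_load:.0%}")
print(f"By total capacity: {by_total_capacity:.0%}")
print(f"By energy:         {by_energy:.0%}")
```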

The level of penetration of intermittent variable sources is significant for the following reasons:

  • Power grids with significant amounts of dispatchable pumped storage, hydropower with reservoir or pondage or other peaking power plants such as natural gas-fired power plants are capable of accommodating fluctuations from intermittent power more easily.
  • Relatively small electric power systems without strong interconnection (such as remote islands) may retain some existing diesel generators, but consume less fuel, for flexibility until cleaner energy sources or storage such as pumped hydro or batteries become cost-effective.

In the early 2020s wind and solar produce 10% of the world's electricity, but supply in the 40-55% penetration range has already been implemented in several systems, with over 65% planned for the UK by 2030.

There is no generally accepted maximum level of penetration, as each system's capacity to compensate for intermittency differs, and the systems themselves will change over time. Discussion of acceptable or unacceptable penetration figures should be treated and used with caution, as the relevance or significance will be highly dependent on local factors, grid structure and management, and existing generation capacity.

For most systems worldwide, existing penetration levels are significantly lower than practical or theoretical maximums.

Maximum penetration limits

Maximum penetration of combined wind and solar is estimated at around 70% to 90% without regional aggregation, demand management or storage; and up to 94% with 12 hours of storage. Economic efficiency and cost considerations are more likely to dominate as critical factors; technical solutions may allow higher penetration levels to be considered in future, particularly if cost considerations are secondary.

Economic impacts of variability

Estimates of the cost of wind and solar energy may include estimates of the "external" costs of wind and solar variability, or be limited to the cost of production. All electrical plant has costs that are separate from the cost of production, including, for example, the cost of any necessary transmission capacity or reserve capacity in case of loss of generating capacity. Many types of generation, particularly fossil fuel derived, will also have cost externalities such as pollution, greenhouse gas emission, and habitat destruction which are generally not directly accounted for. The magnitude of the economic impacts is debated and will vary by location, but is expected to rise with higher penetration levels. At low penetration levels, costs such as operating reserve and balancing costs are believed to be insignificant.

Intermittency may introduce additional costs that are distinct from or of a different magnitude than for traditional generation types. These may include:

  • Transmission capacity: transmission capacity may be more expensive than for nuclear and coal generating capacity due to lower load factors. Transmission capacity will generally be sized to projected peak output, but average output for wind will be significantly lower, raising the cost per unit of energy actually transmitted (a simple comparison is sketched after this list). However, transmission costs are a low fraction of total energy costs.
  • Additional operating reserve: if additional wind and solar does not correspond to demand patterns, additional operating reserve may be required compared to other generating types, however this does not result in higher capital costs for additional plants since this is merely existing plants running at low output - spinning reserve. Contrary to statements that all wind must be backed by an equal amount of "back-up capacity", intermittent generators contribute to base capacity "as long as there is some probability of output during peak periods". Back-up capacity is not attributed to individual generators, as back-up or operating reserve "only have meaning at the system level".
  • Balancing costs: to maintain grid stability, some additional costs may be incurred for balancing of load with demand. Although improvements to grid balancing can be costly, they can lead to long term savings.
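
To illustrate the per-unit-energy effect mentioned in the transmission item above, here is a small sketch (the annualized line cost and the capacity factors are assumptions) dividing a fixed line cost, sized to peak output, by the energy actually transmitted:

```python
# Per-MWh transmission cost when a line is sized to peak output; cost figures are assumed.
HOURS_PER_YEAR = 8760

def transmission_cost_per_mwh(annual_line_cost: float, peak_mw: float, capacity_factor: float) -> float:
    """Annualized line cost divided by the energy actually transmitted over the year."""
    energy_mwh = peak_mw * capacity_factor * HOURS_PER_YEAR
    return annual_line_cost / energy_mwh

line_cost = 20_000_000  # assumed annualized cost of a line sized for 1,000 MW of peak output
print(f"Wind (35% capacity factor):     {transmission_cost_per_mwh(line_cost, 1000, 0.35):.2f} $/MWh")
print(f"Baseload (85% capacity factor): {transmission_cost_per_mwh(line_cost, 1000, 0.85):.2f} $/MWh")
```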

In many countries, for many types of variable renewable energy, the government from time to time invites companies to tender sealed bids to construct a certain capacity of, for example, solar power to connect to certain electricity substations. By accepting the lowest bid, the government commits to buying at that price per kWh for a fixed number of years, or up to a certain total amount of power. This provides certainty for investors against highly volatile wholesale electricity prices. However, they may still risk exchange rate volatility if they borrowed in foreign currency.

Regulation and grid planning

Britain

The operator of the British electricity system has said that it will be capable of operating zero-carbon by 2025, whenever there is enough renewable generation, and may be carbon negative by 2033. The company, National Grid Electricity System Operator, states that new products and services will help reduce the overall cost of operating the system.

Operator (computer programming)

From Wikipedia, the free encyclopedia