Saturday, September 2, 2023

Electromagnetic compatibility

Anechoic RF chamber used for EMC testing (radiated emissions and immunity). The furniture has to be made of wood or plastic, not metal.
Log-periodic antenna used for outdoor EMC measurements

Electromagnetic compatibility (EMC) is the ability of electrical equipment and systems to function acceptably in their electromagnetic environment, by limiting the unintentional generation, propagation and reception of electromagnetic energy which may cause unwanted effects such as electromagnetic interference (EMI) or even physical damage to operational equipment. The goal of EMC is the correct operation of different equipment in a common electromagnetic environment. It is also the name given to the associated branch of electrical engineering.

EMC pursues three main classes of issue. Emission is the generation of electromagnetic energy, whether deliberate or accidental, by some source and its release into the environment. EMC studies the unwanted emissions and the countermeasures which may be taken in order to reduce unwanted emissions. The second class, susceptibility, is the tendency of electrical equipment, referred to as the victim, to malfunction or break down in the presence of unwanted emissions, which are known as radio frequency interference (RFI). Immunity is the opposite of susceptibility, being the ability of equipment to function correctly in the presence of RFI, with the discipline of "hardening" equipment being known equally as susceptibility or immunity. A third class studied is coupling, which is the mechanism by which emitted interference reaches the victim.

Interference mitigation and hence electromagnetic compatibility may be achieved by addressing any or all of these issues, i.e., quieting the sources of interference, inhibiting coupling paths and/or hardening the potential victims. In practice, many of the engineering techniques used, such as grounding and shielding, apply to all three issues.

History

Origins

The earliest EMC issue was lightning strike (lightning electromagnetic pulse, or LEMP) on ships and buildings. Lightning rods or lightning conductors began to appear in the mid-18th century. With the advent of widespread electricity generation and power supply lines from the late 19th century on, problems also arose with equipment short-circuit failure affecting the power supply, and with local fire and shock hazard when the power line was struck by lightning. Power stations were provided with output circuit breakers. Buildings and appliances would soon be provided with input fuses, and later in the 20th century miniature circuit breakers (MCB) would come into use.

Early twentieth century

It may be said that radio interference and its correction arose with the first spark-gap experiment of Marconi in the late 1800s. As radio communications developed in the first half of the 20th century, interference between broadcast radio signals began to occur and an international regulatory framework was set up to ensure interference-free communications.

Switching devices became commonplace through the middle of the 20th century, typically in petrol-powered cars and motorcycles but also in domestic appliances such as thermostats and refrigerators. This caused transient interference with domestic radio and (after World War II) TV reception, and in due course laws were passed requiring the suppression of such interference sources.

ESD problems first arose with accidental electric spark discharges in hazardous environments such as coal mines and when refuelling aircraft or motor cars. Safe working practices had to be developed.

Postwar period

After World War II the military became increasingly concerned with the effects of nuclear electromagnetic pulse (NEMP), lightning strike, and even high-powered radar beams, on vehicle and mobile equipment of all kinds, and especially aircraft electrical systems.

When high RF emission levels from other sources became a potential problem (such as with the advent of microwave ovens), certain frequency bands were designated for Industrial, Scientific and Medical (ISM) use, allowing emission levels limited only by thermal safety standards. Later, the International Telecommunication Union adopted a Recommendation providing limits of radiation from ISM devices in order to protect radiocommunications. A variety of issues such as sideband and harmonic emissions, broadband sources, and the ever-increasing popularity of electrical switching devices and their victims, resulted in a steady development of standards and laws.

From the late 1970s, the popularity of modern digital circuitry rapidly grew. As the technology developed, with ever-faster switching speeds (increasing emissions) and lower circuit voltages (increasing susceptibility), EMC increasingly became a source of concern. Many more nations became aware of EMC as a growing problem and issued directives to the manufacturers of digital electronic equipment, which set out the essential manufacturer requirements before their equipment could be marketed or sold. Organizations in individual nations, across Europe and worldwide, were set up to maintain these directives and associated standards. In 1979, the American FCC published a regulation that required the electromagnetic emissions of all "digital devices" to be below certain limits. This regulatory environment led to a sharp growth in the EMC industry supplying specialist devices and equipment, analysis and design software, and testing and certification services. Low-voltage digital circuits, especially CMOS transistors, became more susceptible to ESD damage as they were miniaturised and, despite the development of on-chip hardening techniques, a new ESD regulatory regime had to be developed.

Modern era

From the 1980s on, the explosive growth in mobile communications and broadcast media channels put huge pressure on the available radio spectrum. Regulatory authorities began squeezing band allocations closer and closer together, relying on increasingly sophisticated EMC control methods, especially in the digital communications domain, to keep cross-channel interference to acceptable levels. Digital systems are inherently less susceptible than analogue systems, and also offer far easier ways (such as software) to implement highly sophisticated protection and error-correction measures.

In 1985, the USA released the ISM bands for low-power mobile digital communications, leading to the development of Wi-Fi and remotely operated car door keys. This approach relies on the intermittent nature of ISM interference and the use of sophisticated error-correction methods to ensure lossless reception during the quiet gaps between any bursts of interference.

Concepts

"Electromagnetic interference" (EMI) is defined as the "degradation in the performance of equipment or transmission channel or a system caused by an electromagnetic disturbance" (IEV 161-01-06) while "electromagnetic disturbance" is defined as "an electromagnetic phenomenon that can degrade the performance of a device, equipment or system, or adversely affect living or inert matter (IEV 161-01-05). The terms "electromagnetic disturbance" and "electromagnetic interference" designate respectively the cause and the effect,

Electromagnetic compatibility (EMC) is an equipment characteristic or property and is defined as "the ability of equipment or a system to function satisfactorily in its electromagnetic environment without introducing intolerable electromagnetic disturbances to anything in that environment" (IEV 161-01-07).

EMC ensures the correct operation, in the same electromagnetic environment, of different equipment items which use or respond to electromagnetic phenomena, and the avoidance of any interference. Another way of saying this is that EMC is the control of EMI so that unwanted effects are prevented.

Besides understanding the phenomena in themselves, EMC also addresses the countermeasures, such as control regimes, design and measurement, which should be taken in order to prevent emissions from causing any adverse effect.

Technical characteristics of interference

Types of interference

EMC is often understood as the control of electromagnetic interference (EMI). Electromagnetic interference divides into several categories according to the source and signal characteristics.

The origin of interference, often called "noise" in this context, can be man-made (artificial) or natural.

Continuous, or continuous wave (CW), interference comprises a given range of frequencies. This type is naturally divided into sub-categories according to frequency range, and as a whole is sometimes referred to as "DC to daylight". One common classification is into narrowband and broadband, according to the spread of the frequency range.

An electromagnetic pulse (EMP), sometimes called a transient disturbance, is a short-duration pulse of energy. This energy is usually broadband by nature, although it often excites a relatively narrow-band damped sine wave response in the victim. Pulse signals divide broadly into isolated and repetitive events.

Coupling mechanisms

The four EMI coupling modes

When a source emits interference, it follows a route to the victim known as the coupling path. There are four basic coupling mechanisms: conductive, capacitive, magnetic or inductive, and radiative. Any coupling path can be broken down into one or more of these coupling mechanisms working together.

Conductive coupling occurs when the coupling path between the source and victim is formed by direct electrical contact with a conducting body.

Capacitive coupling occurs when a varying electrical field exists between two adjacent conductors, inducing a change in voltage on the receiving conductor.

Inductive coupling or magnetic coupling occurs when a varying magnetic field exists between two parallel conductors, inducing a change in voltage along the receiving conductor.

Radiative coupling or electromagnetic coupling occurs when source and victim are separated by a large distance. Source and victim act as radio antennas: the source emits or radiates an electromagnetic wave which propagates across the space in between and is picked up or received by the victim.
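As a rough, first-order illustration of capacitive coupling (the simple one-capacitor model and all component values below are assumed for illustration, not taken from the article), the noise voltage coupled onto a victim node can be estimated from the mutual capacitance and the victim's impedance to ground:

```python
import math

def coupled_voltage(vs, f, c_m, r_victim, c_victim):
    """Magnitude of the noise voltage capacitively coupled onto a victim node.

    First-order model: a source (amplitude vs, frequency f) drives the victim
    node through mutual capacitance c_m; the victim node sees r_victim and
    c_victim to ground.  Vn/Vs = jwCmR / (1 + jwR(Cm + Cv)).
    """
    w = 2 * math.pi * f
    num = w * c_m * r_victim
    den = math.sqrt(1 + (w * r_victim * (c_m + c_victim)) ** 2)
    return vs * num / den

# Assumed illustrative values: a 5 V source switching at 10 MHz, 2 pF of
# mutual capacitance to a 1 kilo-ohm victim node with 10 pF to ground.
vn = coupled_voltage(vs=5.0, f=10e6, c_m=2e-12, r_victim=1e3, c_victim=10e-12)
print(f"coupled noise ~= {vn * 1000:.0f} mV")
```

Even a couple of picofarads of stray capacitance can couple a substantial fraction of a volt at these frequencies, which is why physical separation and shielding of the coupling path matter.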

Control

The damaging effects of electromagnetic interference pose unacceptable risks in many areas of technology, and it is necessary to control such interference and reduce the risks to acceptable levels.

The control of electromagnetic interference (EMI) and assurance of EMC comprises a series of related disciplines:

  • Characterising the threat.
  • Setting standards for emission and susceptibility levels.
  • Design for standards compliance.
  • Testing for standards compliance.

The risk posed by the threat is usually statistical in nature, so much of the work in threat characterisation and standards setting is based on reducing the probability of disruptive EMI to an acceptable level, rather than its assured elimination.

For a complex or novel piece of equipment, this may require the production of a dedicated EMC control plan summarizing the application of the above and specifying additional documents required.

Characterisation of the problem requires understanding of:

  • The interference source and signal.
  • The coupling path to the victim.
  • The nature of the victim both electrically and in terms of the significance of malfunction.

Design

A TV tuner card showing many small bypass capacitors and three metal shields: the PCI bracket, the metal box with two coax inputs, and the shield for the S-Video connector

Breaking a coupling path is equally effective at either the start or the end of the path, therefore many aspects of good EMC design practice apply equally to potential sources and to potential victims. A design which easily couples energy to the outside world will equally easily couple energy in and will be susceptible. A single improvement will often reduce both emissions and susceptibility. Grounding and shielding aim to reduce emissions or divert EMI away from the victim by providing an alternative, low-impedance path. Techniques include:

  • Grounding or earthing schemes such as star earthing for audio equipment or ground planes for RF. The scheme must also satisfy safety regulations.
  • Shielded cables, where the signal wires are surrounded by an outer conductive layer that is grounded at one or both ends.
  • Shielded housings. A conductive metal housing will act as an interference shield. In order to access the interior, such a housing is typically made in sections (such as a box and lid); an RF gasket may be used at the joints to reduce the amount of interference that leaks through. RF gaskets come in various types. A plain metal gasket may be either braided wire or a flat strip slotted to create many springy "fingers". Where a waterproof seal is required, a flexible elastomeric base may be impregnated with chopped metal fibers dispersed into the interior or long metal fibers covering the surface or both.

Other general measures include:

  • Decoupling or filtering at critical points such as cable entries and high-speed switches, using RF chokes and/or RC elements (see the sketch after this list). A line filter implements these measures between a device and a line.
  • Transmission line techniques for cables and wiring, such as balanced differential signal and return paths, and impedance matching.
  • Avoidance of antenna structures such as loops of circulating current, resonant mechanical structures, unbalanced cable impedances or poorly grounded shielding.
  • Eliminating spurious rectifying junctions that can form between metal structures around and near transmitter installations. Such junctions in combination with unintentional antenna structures can radiate harmonics of the transmitter frequency.
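As a minimal illustration of the filtering idea mentioned above (component values are assumed for illustration, not taken from the article), a first-order RC low-pass element at a cable entry attenuates interference above its corner frequency:

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """-3 dB corner frequency of a first-order RC low-pass filter."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

def rc_attenuation_db(f_hz, r_ohms, c_farads):
    """Attenuation of the same filter at f_hz (positive dB = signal reduced)."""
    ratio = f_hz / rc_cutoff_hz(r_ohms, c_farads)
    return 10 * math.log10(1 + ratio ** 2)

# Assumed illustrative values: 100 ohm series element and 1 nF to chassis.
r, c = 100.0, 1e-9
print(f"corner frequency: {rc_cutoff_hz(r, c) / 1e6:.2f} MHz")
for f in (1e6, 10e6, 100e6):
    print(f"{f / 1e6:>6.0f} MHz: {rc_attenuation_db(f, r, c):5.1f} dB attenuation")
```

The wanted (lower-frequency) signal passes largely unaffected while RF interference well above the corner frequency is strongly attenuated, which is the basic trade-off behind line filters and cable-entry decoupling.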
Spread-spectrum modulation reduces EMC peaks: frequency spectrum of a switching power supply during its warm-up period using the spread spectrum method, including a waterfall diagram recorded over a few minutes

Additional measures to reduce emissions include:

  • Avoid unnecessary switching operations. Necessary switching should be done as slowly as is technically possible.
  • Noisy circuits (e. g. with a lot of switching activity) should be physically separated from the rest of the design.
  • High peaks at single frequencies can be avoided by using the spread spectrum method, in which the switching frequency is deliberately modulated so that the emitted energy is spread over a band of frequencies rather than concentrated at a single frequency (see the numerical sketch after this list).
  • Harmonic wave filters.
  • Design for operation at lower signal levels, reducing the energy available for emission.
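As a rough numerical illustration of why spreading helps (a minimal sketch; all values below are assumed and not taken from the article), the following compares the spectral peak of a fixed-frequency switching clock with that of a clock whose frequency is slowly swept over a ±2% band:

```python
import numpy as np

fs = 100e6                      # sample rate (assumed illustrative value)
t = np.arange(0, 2e-3, 1 / fs)  # 2 ms of signal
f0 = 1e6                        # nominal 1 MHz switching frequency
mod_rate = 30e3                 # 30 kHz triangular modulation of the clock
spread = 0.02                   # +/- 2 % frequency deviation

# Fixed-frequency square-wave clock
clk_fixed = np.sign(np.sin(2 * np.pi * f0 * t))

# Spread-spectrum clock: instantaneous frequency follows a triangle profile
tri = 2 * np.abs(2 * ((mod_rate * t) % 1) - 1) - 1      # triangle in [-1, 1]
phase = 2 * np.pi * np.cumsum(f0 * (1 + spread * tri)) / fs
clk_spread = np.sign(np.sin(phase))

def peak_db(x):
    """Peak of the windowed amplitude spectrum, in dB (arbitrary reference)."""
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return 20 * np.log10(spectrum.max())

print(f"fixed clock peak : {peak_db(clk_fixed):.1f} dB")
print(f"spread clock peak: {peak_db(clk_spread):.1f} dB")
# The total emitted energy is unchanged, but the spread-spectrum clock's
# energy is smeared over a band of roughly +/- 20 kHz, so its highest
# narrow-band spectral line is several dB lower.
```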

Additional measures to reduce susceptibility include:

  • Fuses, trip switches and circuit breakers.
  • Transient absorbers.
  • Design for operation at higher signal levels, reducing the relative noise level in comparison.
  • Error-correction techniques in digital circuitry. These may be implemented in hardware, software or a combination of both (a small software example follows this list).
  • Differential signaling or other common-mode noise techniques for signal routing
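As a toy illustration of the software side of error correction (a minimal sketch, not drawn from the article), the following implements a Hamming(7,4) code, which can correct any single bit flipped by a noise burst within a 7-bit codeword:

```python
def hamming74_encode(d):
    """d: list of four data bits -> 7-bit codeword [p1, p2, d1, p4, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    """Correct any single-bit error, then return the four data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s4       # 0 = no error; otherwise 1-based bit index
    if pos:
        c = c[:]
        c[pos - 1] ^= 1              # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                          # simulate one bit flipped by interference
assert hamming74_decode(word) == [1, 0, 1, 1]
```

Practical interference-hardened links typically use stronger schemes (CRCs with retransmission, Reed–Solomon or convolutional codes), but the principle is the same: added redundancy lets the receiver detect and repair bits corrupted by a burst of interference.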

Testing

Testing is required to confirm that a particular device meets the required standards. It divides broadly into emissions testing and susceptibility testing. Open-area test sites, or OATS, are the reference sites in most standards. They are especially useful for emissions testing of large equipment systems. However RF testing of a physical prototype is most often carried out indoors, in a specialised EMC test chamber. Types of chamber include anechoic, reverberation and the gigahertz transverse electromagnetic cell (GTEM cell). Sometimes computational electromagnetics simulations are used to test virtual models. Like all compliance testing, it is important that the test equipment, including the test chamber or site and any software used, be properly calibrated and maintained. Typically, a given run of tests for a particular piece of equipment will require an EMC test plan and follow-up test report. The full test program may require the production of several such documents.

Emissions are typically measured for radiated field strength and where appropriate for conducted emissions along cables and wiring. Inductive (magnetic) and capacitive (electric) field strengths are near-field effects, and are only important if the device under test (DUT) is designed for location close to other electrical equipment. For conducted emissions, typical transducers include the LISN (line impedance stabilisation network) or AMN (artificial mains network) and the RF current clamp. For radiated emission measurement, antennas are used as transducers. Typical antennas specified include dipole, biconical, log-periodic, double ridged guide and conical log-spiral designs. Radiated emissions must be measured in all directions around the DUT. Specialized EMI test receivers or EMI analysers are used for EMC compliance testing. These incorporate bandwidths and detectors as specified by international EMC standards. An EMI receiver may be based on a spectrum analyser to measure the emission levels of the DUT across a wide band of frequencies (frequency domain), or on a tunable narrower-band device which is swept through the desired frequency range. EMI receivers along with specified transducers can often be used for both conducted and radiated emissions. Pre-selector filters may also be used to reduce the effect of strong out-of-band signals on the front-end of the receiver. Some pulse emissions are more usefully characterized using an oscilloscope to capture the pulse waveform in the time domain.
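As a small worked example of how a radiated-emissions reading is turned into a field strength (the numeric values below are assumed for illustration), the receiver voltage is combined with the antenna factor of the measurement antenna and the cable loss:

```python
def field_strength_dbuv_per_m(receiver_dbuv, antenna_factor_db, cable_loss_db):
    """Convert an EMI receiver reading to radiated field strength.

    E [dBuV/m] = receiver voltage [dBuV] + antenna factor [dB/m] + cable loss [dB]
    """
    return receiver_dbuv + antenna_factor_db + cable_loss_db

# Assumed example: 28 dBuV on the receiver, an antenna with a 14 dB/m antenna
# factor at this frequency, and 1.5 dB of cable loss.
e = field_strength_dbuv_per_m(28.0, 14.0, 1.5)
print(f"field strength = {e:.1f} dBuV/m")
```

The result is then compared against the radiated-emission limit of the applicable standard at that frequency and measurement distance.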

Radiated field susceptibility testing typically involves a high-powered source of RF or EM energy and a radiating antenna to direct the energy at the potential victim or device under test (DUT). Conducted voltage and current susceptibility testing typically involves a high-powered signal generator, and a current clamp or other type of transformer to inject the test signal. Transient or EMP signals are used to test the immunity of the DUT against powerline disturbances including surges, lightning strikes and switching noise. In motor vehicles, similar tests are performed on battery and signal lines. The transient pulse may be generated digitally and passed through a broadband pulse amplifier, or applied directly to the transducer from a specialised pulse generator. Electrostatic discharge testing is typically performed with a piezo spark generator called an "ESD pistol". Higher energy pulses, such as lightning or nuclear EMP simulations, can require a large current clamp or a large antenna which completely surrounds the DUT. Some antennas are so large that they are located outdoors, and care must be taken not to cause an EMP hazard to the surrounding environment.

Legislation

Several organizations, both national and international, work to promote international co-operation on standardization (harmonization), including publishing various EMC standards. Where possible, a standard developed by one organization may be adopted with little or no change by others. This helps for example to harmonize national standards across Europe.

International standards organizations active in EMC include the International Electrotechnical Commission (IEC), whose CISPR committee prepares many of the core EMC emission and immunity standards, and the International Telecommunication Union (ITU). Among the main national organizations are regulators and standards bodies such as the Federal Communications Commission (FCC) in the United States.

Compliance with national or international standards is usually laid down by laws passed by individual nations. Different nations can require compliance with different standards.

In European law, EU directive 2014/30/EU (previously 2004/108/EC) on EMC defines the rules for the placing on the market/putting into service of electric/electronic equipment within the European Union. The Directive applies to a vast range of equipment including electrical and electronic appliances, systems and installations. Manufacturers of electric and electronic devices are advised to run EMC tests in order to comply with compulsory CE-labeling. More are given in the list of EMC directives. Compliance with the applicable harmonised standards whose reference is listed in the OJEU under the EMC Directive gives presumption of conformity with the corresponding essential requirements of the EMC Directive.

In 2019, the USA adopted a program for the protection of critical infrastructure against an electromagnetic pulse, whether caused by a geomagnetic storm or a high-altitude nuclear weapon.

Kidney disease

From Wikipedia, the free encyclopedia
Kidney disease
Other names: Renal disease, nephropathy
Pathologic kidney specimen showing marked pallor of the cortex, contrasting to the darker areas of surviving medullary tissue. The patient died with acute kidney injury.
Specialty: Nephrology, urology
Complications: Uremia, death

Kidney disease, or renal disease, technically referred to as nephropathy, is damage to or disease of a kidney. Nephritis is an inflammatory kidney disease and has several types according to the location of the inflammation. Inflammation can be diagnosed by blood tests. Nephrosis is non-inflammatory kidney disease. Nephritis and nephrosis can give rise to nephritic syndrome and nephrotic syndrome respectively. Kidney disease usually causes a loss of kidney function to some degree and can result in kidney failure, the complete loss of kidney function. Kidney failure is known as the end-stage of kidney disease, where dialysis or a kidney transplant is the only treatment option.

Chronic kidney disease is defined as prolonged kidney abnormalities (functional and/or structural in nature) that last for more than three months. Acute kidney disease is now termed acute kidney injury and is marked by the sudden reduction in kidney function over seven days. In 2007, about one in eight Americans had chronic kidney disease. This rate has increased over time: as of 2021, about 1 in 7 Americans were estimated to have CKD.

Causes

Map: deaths due to kidney diseases per million persons in 2012, ranging from about 16 to 343 across countries.

Causes of kidney disease include deposition of the Immunoglobulin A antibodies in the glomerulus, administration of analgesics, xanthine oxidase deficiency, toxicity of chemotherapy agents, and a long-term exposure to lead or its salts. Chronic conditions that can produce nephropathy include systemic lupus erythematosus, diabetes mellitus and high blood pressure (hypertension), which lead to diabetic nephropathy and hypertensive nephropathy, respectively.

Analgesics

One cause of nephropathy is the long term usage of pain medications known as analgesics. The pain medicines which can cause kidney problems include aspirin, acetaminophen, and nonsteroidal anti-inflammatory drugs (NSAIDs). This form of nephropathy is "chronic analgesic nephritis," a chronic inflammatory change characterized by loss and atrophy of tubules and interstitial fibrosis and inflammation (BRS Pathology, 2nd edition).

Specifically, long-term use of the analgesic phenacetin has been linked to renal papillary necrosis (necrotizing papillitis).

Diabetes

Diabetic nephropathy is a progressive kidney disease caused by angiopathy of the capillaries in the glomeruli. It is characterized by nephrotic syndrome and diffuse scarring of the glomeruli. It is particularly associated with poorly managed diabetes mellitus and is a primary reason for dialysis in many developed countries. It is classified as a small blood vessel complication of diabetes.

Autosomal dominant polycystic kidney disease

Autosomal dominant polycystic kidney disease (ADPKD) is a genetic disorder. Gabow (1990) describes it as "the most common genetic disease, affecting a half million Americans. The clinical phenotype can result from at least two different gene defects. One gene that can cause ADPKD has been located on the short arm of chromosome 16." It is thus among the most common inherited kidney disorders.

Long COVID and Kidney Disease

Yende & Parikh 2021 talk about the effects that COVID can have on a person that has a pre-existing health issue regarding kidney diseases. "frailty, chronic diseases, disability and immunodeficiency are at increased risk of kidney disease and progression to kidney failure, and infection with SARS-CoV-2 can further increase this risk" (Long COVID and Kidney Disease, 2021).

Diet

Higher dietary intake of animal protein, animal fat, and cholesterol may increase risk for microalbuminuria, a sign of kidney function decline, and generally, diets higher in fruits, vegetables, and whole grains but lower in meat and sweets may be protective against kidney function decline. This may be because sources of animal protein, animal fat, and cholesterol, and sweets are more acid-producing, while fruits, vegetables, legumes, and whole grains are more base-producing.

IgA nephropathy

IgA nephropathy is the most common glomerulonephritis throughout the world. Primary IgA nephropathy is characterized by deposition of the IgA antibody in the glomerulus. The classic presentation (in 40–50% of cases) is episodic frank hematuria, which usually starts within a day or two of a non-specific upper respiratory tract infection (hence synpharyngitic), as opposed to post-streptococcal glomerulonephritis, which occurs some time (weeks) after the initial infection. Less commonly, gastrointestinal or urinary infection can be the inciting agent. All of these infections have in common the activation of mucosal defenses and hence IgA antibody production.

Iodinated contrast media

Kidney disease induced by iodinated contrast media (ICM) is called contrast-induced nephropathy (CIN) or contrast-induced acute kidney injury (AKI). The underlying mechanisms are currently unclear, but there is a body of evidence that several factors, including induction of apoptosis, play a role.

Lithium

Lithium, a medication commonly used to treat bipolar disorder and schizoaffective disorders, can cause nephrogenic diabetes insipidus; its long-term use can lead to nephropathy.

Lupus

Despite expensive treatments, lupus nephritis remains a major cause of morbidity and mortality in people with relapsing or refractory disease.

Xanthine oxidase deficiency

Another possible cause of kidney disease is decreased function of xanthine oxidase in the purine degradation pathway. Xanthine oxidase degrades hypoxanthine to xanthine and then to uric acid. Xanthine is not very soluble in water; therefore, an increase in xanthine forms crystals (which can lead to kidney stones) and results in damage to the kidney. Xanthine oxidase inhibitors, like allopurinol, can cause nephropathy.

Polycystic disease of the kidneys

An additional possible cause of nephropathy is the formation of cysts, or fluid-filled pockets, within the kidneys. These cysts become enlarged with the progression of aging, causing renal failure. Cysts may also form in other organs, including the liver, brain, and ovaries. Polycystic kidney disease is a genetic disease caused by mutations in the PKD1, PKD2, and PKHD1 genes. This disease affects about half a million people in the US. Polycystic kidneys are susceptible to infections and cancer.

Toxicity of chemotherapy agents

Nephropathy can be associated with some therapies used to treat cancer. The most common form of kidney disease in cancer patients is acute kidney injury (AKI), usually due to volume depletion from vomiting and diarrhea following chemotherapy, or occasionally due to kidney toxicities of chemotherapeutic agents. Kidney failure from the breakdown of cancer cells, usually after chemotherapy, is unique to onconephrology. Several chemotherapeutic agents, for example cisplatin, are associated with acute and chronic kidney injuries. Newer agents such as anti-vascular endothelial growth factor (anti-VEGF) therapies are also associated with similar injuries, as well as proteinuria, hypertension and thrombotic microangiopathy.

Diagnosis

The standard diagnostic workup of suspected kidney disease includes a medical history, physical examination, a urine test, and an ultrasound of the kidneys (renal ultrasonography). An ultrasound is essential in the diagnosis and management of kidney disease.

Treatment

Treatment approaches for kidney disease focus on managing the symptoms, controlling the progression, and also treating co-morbidities that a person may have.

Dialysis

Transplantation

Millions of people across the world have kidney disease. Of those millions, several thousand will need dialysis or a kidney transplant at its end-stage. In the United States, as of 2008, 16,500 people needed a kidney transplant. Of those, 5,000 died while waiting for a transplant. Currently, there is a shortage of donors, and in 2007 there were only 64,606 kidney transplants in the world. This shortage of donors is causing countries to place monetary value on kidneys. Countries such as Iran and Singapore are eliminating their waiting lists by paying their citizens to donate. Also, the black market accounts for 5-10 percent of transplants that occur worldwide. The act of buying an organ through the black market is illegal in the United States. To be put on the waiting list for a kidney transplant, patients must first be referred by a physician; then they must choose and contact a donor hospital. Once they choose a donor hospital, patients must then receive an evaluation to make sure they are suitable candidates for a transplant. In order to be a match for a kidney transplant, patients must match blood type and human leukocyte antigen factors with their donors. They must also have no reactions to the antibodies from the donor's kidneys.

Prognosis

Kidney disease can have serious consequences if it cannot be controlled effectively. Generally, the progression of kidney disease is from mild to serious. Some kidney diseases can cause kidney failure.

Carbon steel

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Carbon_steel

Carbon steel is a steel with carbon content from about 0.05 up to 2.1 percent by weight. Under the definition of carbon steel from the American Iron and Steel Institute (AISI), a steel is considered carbon steel when no minimum content is specified for chromium, nickel, molybdenum, or other elements added to obtain a desired alloying effect, and when the specified maximum for manganese does not exceed 1.65 percent by weight.

The term carbon steel may also be used in reference to steel which is not stainless steel; in this use carbon steel may include alloy steels. High-carbon steel has many different uses, such as in milling machines, cutting tools (such as chisels) and high-strength wires. These applications require a much finer microstructure, which improves the toughness.

Carbon steel is a popular metal choice for knife-making due to its high amount of carbon, giving the blade more edge retention. To make the most out of this type of steel it is very important to heat treat it properly. If not, the knife may end up being brittle, or too soft to hold an edge.

As the carbon content percentage rises, steel has the ability to become harder and stronger through heat treating; however, it becomes less ductile. Regardless of the heat treatment, a higher carbon content reduces weldability. In carbon steels, the higher carbon content lowers the melting point.

Uses of carbon steel

  • Carbon steel is used to construct buildings, bridges, and other infrastructure projects.
  • It is also used in producing pipes, fittings, and other components for the oil and gas industry.
  • Carbon steel is an essential material in the automotive industry, where it is used to make parts such as engine blocks, transmission components, and suspension parts.
  • It is also utilised in the production of railway tracks and locomotives.

Properties, characteristics & environmental impact

  • Carbon steel is often divided into two main categories: low-carbon steel and high-carbon steel.
  • Carbon steel may also contain other elements, such as manganese, phosphorus, sulfur, and silicon, which can affect its properties.
  • Carbon steel can be easily machined and welded, making it versatile for various applications. It can also be heat treated to improve its strength and durability.
  • Carbon steel is susceptible to rust and corrosion, especially in environments with high moisture levels and/or salt.
  • Carbon steel can be shielded from corrosion by coating it with paint, varnish, or other protective material.
  • Alternatively, a stainless steel alloy containing chromium can be used instead, as the chromium provides excellent corrosion resistance.
  • Carbon steel is sometimes alloyed with other elements to improve its properties, such as adding chromium and/or nickel to improve its resistance to corrosion and oxidation or adding molybdenum to improve its strength and toughness at high temperatures.
  • Carbon steel is an environmentally friendly material, as it is easily recyclable and can be reused in various applications. It is also energy-efficient to produce, as it requires less energy than other metals such as aluminium and copper.

Type

Mild or low-carbon steel

Mild steel (iron containing a small percentage of carbon, strong and tough but not readily tempered), also known as plain-carbon steel and low-carbon steel, is now the most common form of steel because its price is relatively low while it provides material properties that are acceptable for many applications. Mild steel contains approximately 0.05–0.30% carbon making it malleable and ductile. Mild steel has a relatively low tensile strength, but it is cheap and easy to form. Surface hardness can be increased with carburization.

In applications where large cross-sections are used to minimize deflection, failure by yield is not a risk so low-carbon steels are the best choice, for example as structural steel. The density of mild steel is approximately 7.85 g/cm³ (7,850 kg/m³; 0.284 lb/cu in) and the Young's modulus is 200 GPa (29×10⁶ psi).
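A small numerical check of that claim, with assumed beam dimensions and an assumed yield strength of 250 MPa for structural mild steel (only the 200 GPa modulus comes from the text), might look like this:

```python
# Simply supported mild-steel beam with a central point load, sized so that
# the mid-span deflection equals a typical serviceability limit of span/360.
# All dimensions below are assumed for illustration only.
E = 200e9                # Young's modulus of mild steel, Pa (from the text)
span = 6.0               # m
depth = 0.20             # m (rectangular section; depth governs bending stress)
width = 0.10             # m
yield_strength = 250e6   # Pa, typical for structural mild steel (assumed)

I = width * depth ** 3 / 12        # second moment of area
deflection_limit = span / 360
# Load that produces exactly the deflection limit: delta = P L^3 / (48 E I)
P = 48 * E * I * deflection_limit / span ** 3
# Corresponding maximum bending stress: sigma = M c / I, with M = P L / 4
sigma = (P * span / 4) * (depth / 2) / I

print(f"allowable load for L/360 deflection: {P / 1e3:.1f} kN")
print(f"bending stress at that load: {sigma / 1e6:.0f} MPa "
      f"({sigma / yield_strength:.0%} of yield)")
```

For this geometry the stress at the deflection limit comes out well below yield, illustrating why stiffness rather than yield strength governs such designs.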

Low-carbon steels display yield-point runout where the material has two yield points. The first yield point (or upper yield point) is higher than the second and the yield drops dramatically after the upper yield point. If a low-carbon steel is only stressed to some point between the upper and lower yield point then the surface develops Lüder bands. Low-carbon steels contain less carbon than other steels and are easier to cold-form, making them easier to handle. Typical applications of low carbon steel are car parts, pipes, construction, and food cans.

High-tensile steel

High-tensile steels are low-carbon, or steels at the lower end of the medium-carbon range, which have additional alloying ingredients in order to increase their strength, wear properties or specifically tensile strength. These alloying ingredients include chromium, molybdenum, silicon, manganese, nickel, and vanadium. Impurities such as phosphorus and sulfur have their maximum allowable content restricted.

Higher-carbon steels

Carbon steels which can successfully undergo heat-treatment have a carbon content in the range of 0.30–1.70% by weight. Trace impurities of various other elements can significantly affect the quality of the resulting steel. Trace amounts of sulfur in particular make the steel red-short, that is, brittle and crumbly at working temperatures. Low-alloy carbon steel, such as A36 grade, contains about 0.05% sulfur and melts around 1,426–1,538 °C (2,600–2,800 °F). Manganese is often added to improve the hardenability of low-carbon steels. These additions turn the material into a low-alloy steel by some definitions, but AISI's definition of carbon steel allows up to 1.65% manganese by weight. There are two types of higher-carbon steels: high-carbon steel and ultra-high-carbon steel. High-carbon steel sees limited use because it has extremely poor ductility and weldability and a higher cost of production. The applications best suited to high-carbon steel include the spring industry, the farm industry, and the production of a wide range of high-strength wires.

AISI classification

Carbon steel is broken down into four classes based on carbon content:

Low-carbon steel

0.05 to 0.15% carbon content (plain carbon steel).

Medium-carbon steel

Approximately 0.3–0.5% carbon content. Balances ductility and strength and has good wear resistance; used for large parts, forging and automotive components.

High-carbon steel

Approximately 0.6 to 1.0% carbon content. Very strong, used for springs, edged tools, and high-strength wires.

Ultra-high-carbon steel

Approximately 1.25–2.0% carbon content. Steels that can be tempered to great hardness. Used for special purposes like (non-industrial-purpose) knives, axles, and punches. Most steels with more than 2.5% carbon content are made using powder metallurgy.

Heat treatment

Iron-carbon phase diagram, showing the temperature and carbon ranges for certain types of heat treatments

The purpose of heat treating carbon steel is to change the mechanical properties of steel, usually ductility, hardness, yield strength, or impact resistance. Note that the electrical and thermal conductivity are only slightly altered. As with most strengthening techniques for steel, Young's modulus (elasticity) is unaffected. All treatments of steel trade ductility for increased strength and vice versa.

Iron has a higher solubility for carbon in the austenite phase; therefore all heat treatments, except spheroidizing and process annealing, start by heating the steel to a temperature at which the austenitic phase can exist. The steel is then quenched (heat drawn out) at a moderate to low rate, allowing carbon to diffuse out of the austenite and form iron carbide (cementite) while leaving ferrite, or at a high rate, trapping the carbon within the iron and thus forming martensite. The rate at which the steel is cooled through the eutectoid temperature (about 727 °C or 1,341 °F) affects the rate at which carbon diffuses out of austenite and forms cementite. Generally speaking, cooling swiftly will leave iron carbide finely dispersed and produce a fine-grained pearlite, while cooling slowly will give a coarser pearlite.

Cooling a hypoeutectoid steel (less than 0.77 wt% C) results in a lamellar-pearlitic structure of iron carbide layers with α-ferrite (nearly pure iron) between them. If it is a hypereutectoid steel (more than 0.77 wt% C) then the structure is full pearlite with small grains (larger than the pearlite lamellae) of cementite formed on the grain boundaries. A eutectoid steel (0.77% carbon) will have a pearlite structure throughout the grains with no cementite at the boundaries. The relative amounts of the constituents are found using the lever rule (a worked example follows the list of heat treatments below). The following is a list of the types of heat treatments possible:

Spheroidizing
Spheroidite forms when carbon steel is heated to approximately 700 °C (1,300 °F) for over 30 hours. Spheroidite can form at lower temperatures but the time needed drastically increases, as this is a diffusion-controlled process. The result is a structure of rods or spheres of cementite within primary structure (ferrite or pearlite, depending on which side of the eutectoid you are on). The purpose is to soften higher carbon steels and allow more formability. This is the softest and most ductile form of steel.
Full annealing
Carbon steel is heated to approximately 40 °C (72 °F) above the upper critical temperature (Ac3 or Acm) for 1 hour; this ensures all the ferrite transforms into austenite (although cementite might still exist if the carbon content is greater than the eutectoid). The steel must then be cooled slowly, in the realm of 20 °C (36 °F) per hour. Usually it is just furnace cooled, where the furnace is turned off with the steel still inside. This results in a coarse pearlitic structure, which means the "bands" of pearlite are thick. Fully annealed steel is soft and ductile, with no internal stresses, which is often necessary for cost-effective forming. Only spheroidized steel is softer and more ductile.
Process annealing
A process used to relieve stress in a cold-worked carbon steel with less than 0.3% C. The steel is usually heated to 550 to 650 °C (1,000 to 1,200 °F) for 1 hour, but sometimes temperatures as high as 700 °C (1,300 °F). The image above shows the process annealing area.
Isothermal annealing
It is a process in which hypoeutectoid steel is heated above the upper critical temperature. This temperature is maintained for a time and then reduced to below the lower critical temperature and is again maintained. It is then cooled to room temperature. This method eliminates any temperature gradient.
Normalizing
Carbon steel is heated to approximately 55 °C (99 °F) above the upper critical temperature for 1 hour; this ensures the steel completely transforms to austenite. The steel is then air-cooled, which is a cooling rate of approximately 38 °C (100 °F) per minute. This results in a fine pearlitic structure, and a more-uniform structure. Normalized steel has a higher strength than annealed steel; it has a relatively high strength and hardness.
Quenching
Carbon steel with at least 0.4 wt% C is heated to normalizing temperatures and then rapidly cooled (quenched) in water, brine, or oil to the critical temperature. The critical temperature is dependent on the carbon content, but as a general rule is lower as the carbon content increases. This results in a martensitic structure; a form of steel that possesses a super-saturated carbon content in a deformed body-centered cubic (BCC) crystalline structure, properly termed body-centered tetragonal (BCT), with much internal stress. Thus quenched steel is extremely hard but brittle, usually too brittle for practical purposes. These internal stresses may cause stress cracks on the surface. Quenched steel is approximately three times harder (four with more carbon) than normalized steel.
Martempering (marquenching)
Martempering is not actually a tempering procedure, hence the term marquenching. It is a form of isothermal heat treatment applied after an initial quench, typically in a molten salt bath, at a temperature just above the "martensite start temperature". At this temperature, residual stresses within the material are relieved and some bainite may be formed from the retained austenite which did not have time to transform into anything else. In industry, this is a process used to control the ductility and hardness of a material. With longer marquenching, the ductility increases with a minimal loss in strength; the steel is held in this solution until the inner and outer temperatures of the part equalize. Then the steel is cooled at a moderate speed to keep the temperature gradient minimal. Not only does this process reduce internal stresses and stress cracks, but it also increases impact resistance.
Tempering
This is the most common heat treatment encountered because the final properties can be precisely determined by the temperature and time of the tempering. Tempering involves reheating quenched steel to a temperature below the eutectoid temperature and then cooling. The elevated temperature allows very small amounts of spheroidite to form, which restores ductility but reduces hardness. Actual temperatures and times are carefully chosen for each composition.
Austempering
The austempering process is the same as martempering, except the quench is interrupted and the steel is held in the molten salt bath at temperatures between 205 and 540 °C (400 and 1,000 °F), and then cooled at a moderate rate. The resulting microstructure, called bainite, is acicular and gives the steel great strength (but less than martensite), greater ductility, higher impact resistance, and less distortion than martensitic steel. The disadvantage of austempering is that it can be used only on a few steels, and it requires a special salt bath.
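Returning to the lever rule mentioned above, a minimal worked example follows (the 0.40 wt% C alloy is an assumed example composition; the 0.022% and 6.67% phase compositions are the usual handbook values):

```python
# Lever-rule estimate of the phase fractions in a slowly cooled plain carbon
# steel just below the eutectoid temperature.  Compositions in wt% C.
C_ALLOY     = 0.40    # assumed example alloy (hypoeutectoid)
C_FERRITE   = 0.022   # carbon solubility in ferrite (handbook value)
C_CEMENTITE = 6.67    # carbon content of cementite (handbook value)
C_EUTECTOID = 0.77

# Total ferrite and cementite in the final microstructure
f_ferrite   = (C_CEMENTITE - C_ALLOY) / (C_CEMENTITE - C_FERRITE)
f_cementite = 1 - f_ferrite

# Split between proeutectoid ferrite and pearlite (valid below 0.77 wt% C)
f_pearlite = (C_ALLOY - C_FERRITE) / (C_EUTECTOID - C_FERRITE)

print(f"ferrite   : {f_ferrite:.1%}")    # roughly 94 %
print(f"cementite : {f_cementite:.1%}")  # roughly 6 %
print(f"pearlite  : {f_pearlite:.1%}")   # roughly half the structure is pearlite
```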

Case hardening

Case hardening processes harden only the exterior of the steel part, creating a hard, wear-resistant skin (the "case") but preserving a tough and ductile interior. Carbon steels are not very hardenable, meaning they cannot be hardened throughout thick sections. Alloy steels have better hardenability, so they can be through-hardened and do not require case hardening. This property of carbon steel can be beneficial, because it gives the surface good wear characteristics but leaves the core flexible and shock-absorbing.

Forging temperature of steel

Maximum forging and burning temperatures by steel type:

  • 1.5% carbon: maximum forging temperature 1,920 °F (1,049 °C); burning temperature 2,080 °F (1,140 °C)
  • 1.1% carbon: maximum forging temperature 1,980 °F (1,082 °C); burning temperature 2,140 °F (1,171 °C)
  • 0.9% carbon: maximum forging temperature 2,050 °F (1,121 °C); burning temperature 2,230 °F (1,221 °C)
  • 0.5% carbon: maximum forging temperature 2,280 °F (1,249 °C); burning temperature 2,460 °F (1,349 °C)
  • 0.2% carbon: maximum forging temperature 2,410 °F (1,321 °C); burning temperature 2,680 °F (1,471 °C)
  • 3.0% nickel steel: maximum forging temperature 2,280 °F (1,249 °C); burning temperature 2,500 °F (1,371 °C)
  • 3.0% nickel–chromium steel: maximum forging temperature 2,280 °F (1,249 °C); burning temperature 2,500 °F (1,371 °C)
  • 5.0% nickel (case-hardening) steel: maximum forging temperature 2,320 °F (1,271 °C); burning temperature 2,640 °F (1,449 °C)
  • Chromium–vanadium steel: maximum forging temperature 2,280 °F (1,249 °C); burning temperature 2,460 °F (1,349 °C)
  • High-speed steel: maximum forging temperature 2,370 °F (1,299 °C); burning temperature 2,520 °F (1,385 °C)
  • Stainless steel: maximum forging temperature 2,340 °F (1,282 °C); burning temperature 2,520 °F (1,385 °C)
  • Austenitic chromium–nickel steel: maximum forging temperature 2,370 °F (1,299 °C); burning temperature 2,590 °F (1,420 °C)
  • Silico-manganese spring steel: maximum forging temperature 2,280 °F (1,249 °C); burning temperature 2,460 °F (1,350 °C)

Holocene

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Holocene

Holocene
0.0117 – 0 Ma
Name formality: Formal
Celestial body: Earth
Regional usage: Global (ICS)
Time scale(s) used: ICS Time Scale
Chronological unit: Epoch
Stratigraphic unit: Series
Time span formality: Formal
Lower boundary definition: End of the Younger Dryas stadial
Lower boundary GSSP: NGRIP2 ice core, Greenland (75.1000°N, 42.3200°W)
Lower GSSP ratified: 2008
Upper boundary definition: Present day
Upper boundary GSSP: N/A
Upper GSSP ratified: N/A

The Holocene (/ˈhɒl.əsiːn, ˈhoʊ.lə-/) is the current geological epoch. It began approximately 11,700 years before 2000 CE (11,650 cal years BP, 9700 BCE or 300 HE). It follows the Last Glacial Period, which concluded with the Holocene glacial retreat. The Holocene and the preceding Pleistocene together form the Quaternary period. The Holocene has been identified with the current warm period, known as MIS 1. It is considered by some to be an interglacial period within the Pleistocene Epoch, called the Flandrian interglacial.

The Holocene corresponds with the rapid proliferation, growth and impacts of the human species worldwide, including all of its written history, technological revolutions, development of major civilizations, and overall significant transition towards urban living in the present. The human impact on modern-era Earth and its ecosystems may be considered of global significance for the future evolution of living species, including approximately synchronous lithospheric evidence, or more recently hydrospheric and atmospheric evidence of the human impact. In July 2018, the International Union of Geological Sciences split the Holocene Epoch into three distinct ages based on the climate: the Greenlandian (11,700 years ago to 8,200 years ago), the Northgrippian (8,200 years ago to 4,200 years ago) and the Meghalayan (4,200 years ago to the present), as proposed by the International Commission on Stratigraphy. The oldest age, the Greenlandian, was characterized by a warming following the preceding ice age. The Northgrippian Age is known for vast cooling due to a disruption in ocean circulations that was caused by the melting of glaciers. The most recent age of the Holocene is the present Meghalayan, which began with an extreme drought that lasted around 200 years.

Etymology

The word Holocene was formed from two Ancient Greek words. Holos (ὅλος) is the Greek word for "whole". "Cene" comes from the Greek word kainos (καινός), meaning "new". The concept is that this epoch is "entirely new". The suffix '-cene' is used for all seven epochs of the Cenozoic Era.

Overview

The International Commission on Stratigraphy has defined the Holocene as starting approximately 11,700 years before 2000 CE (11,650 cal years BP, or 9,700 BCE). The Subcommission on Quaternary Stratigraphy (SQS) regards the term 'recent' as an incorrect way of referring to the Holocene, preferring the term 'modern' instead to describe current processes. It also observes that the term 'Flandrian' may be used as a synonym for Holocene, although it is becoming outdated. The International Commission on Stratigraphy, however, considers the Holocene an epoch following the Pleistocene and specifically following the last glacial period. Local names for the last glacial period include the Wisconsinan in North America, the Weichselian in Europe, the Devensian in Britain, the Llanquihue in Chile and the Otiran in New Zealand.

The Holocene can be subdivided into five time intervals, or chronozones, based on climatic fluctuations:

  • Preboreal (10 ka – 9 ka BP)
  • Boreal (9 ka – 8 ka BP)
  • Atlantic (8 ka – 5 ka BP)
  • Subboreal (5 ka – 2.5 ka BP)
  • Subatlantic (2.5 ka BP – present)

Note: "ka BP" means "kilo-annum Before Present", i.e. 1,000 years before 1950 (non-calibrated C14 dates)

Geologists working in different regions are studying sea levels, peat bogs and ice-core samples, using a variety of methods, with a view toward further verifying and refining the Blytt–Sernander sequence. This is a classification of climatic periods initially defined by plant remains in peat mosses. Though the method was once thought to be of little interest, based on 14C dating of peats that was inconsistent with the claimed chronozones, investigators have found a general correspondence across Eurasia and North America. The scheme was defined for Northern Europe, but the climate changes were claimed to occur more widely. The periods of the scheme include a few of the final pre-Holocene oscillations of the last glacial period and then classify climates of more recent prehistory.

Paleontologists have not defined any faunal stages for the Holocene. If subdivision is necessary, periods of human technological development, such as the Mesolithic, Neolithic, and Bronze Age, are usually used. However, the time periods referenced by these terms vary with the emergence of those technologies in different parts of the world.

According to some scholars, a third epoch of the Quaternary, the Anthropocene, has now begun. This term is used to denote the present time-interval in which many geologically significant conditions and processes have been profoundly altered by human activities. The 'Anthropocene' (a term coined by Paul J. Crutzen and Eugene Stoermer in 2000) is not a formally defined geological unit. The Subcommission on Quaternary Stratigraphy of the International Commission on Stratigraphy has a working group to determine whether it should be. In May 2019, members of the working group voted in favour of recognizing the Anthropocene as a formal chrono-stratigraphic unit, with stratigraphic signals around the mid-twentieth century CE as its base. The exact criteria have still to be determined, after which the recommendation also has to be approved by the working group's parent bodies (ultimately the International Union of Geological Sciences).

Geology

The Holocene is a geologic epoch that follows directly after the Pleistocene. Continental motions due to plate tectonics are less than a kilometre over a span of only 10,000 years. However, ice melt caused world sea levels to rise about 35 m (115 ft) in the early part of the Holocene and another 30 m in the later part of the Holocene. In addition, many areas above about 40 degrees north latitude had been depressed by the weight of the Pleistocene glaciers and rose as much as 180 m (590 ft) due to post-glacial rebound over the late Pleistocene and Holocene, and are still rising today.

The sea-level rise and temporary land depression allowed temporary marine incursions into areas that are now far from the sea. For example, marine fossils from the Holocene epoch have been found in locations such as Vermont and Michigan. Other than higher-latitude temporary marine incursions associated with glacial depression, Holocene fossils are found primarily in lakebed, floodplain, and cave deposits. Holocene marine deposits along low-latitude coastlines are rare because the rise in sea levels during the period exceeds any likely tectonic uplift of non-glacial origin.

Post-glacial rebound in the Scandinavia region resulted in a shrinking Baltic Sea. The region continues to rise, still causing weak earthquakes across Northern Europe. An equivalent event in North America was the rebound of Hudson Bay, as it shrank from its larger, immediate post-glacial Tyrrell Sea phase, to its present boundaries.

Climate

Vegetation and water bodies in northern and central Africa in the Eemian (bottom) and Holocene (top)

The climate throughout the Holocene has shown significant variability despite ice core records from Greenland suggesting a more stable climate following the preceding ice age. Marine chemical fluxes during the Holocene were lower than during the Younger Dryas, but were still considerable enough to imply notable changes in the climate. The Greenland ice core records indicate that climate changes became more regional and had a larger effect on the mid-to-low latitudes and mid-to-high latitudes after ~5600 B.P. During the transition from the last glacial to the Holocene, the Huelmo–Mascardi Cold Reversal in the Southern Hemisphere began before the Younger Dryas, and the maximum warmth flowed south to north from 11,000 to 7,000 years ago. It appears that this was influenced by the residual glacial ice remaining in the Northern Hemisphere until the later date.

The Holocene climatic optimum (HCO) was a period of warming throughout the globe. It has been suggested that the warming was not uniform across the world. Ice core measurements imply that the sea surface temperature (SST) gradient east of New Zealand, across the subtropical front (STF), was around 2 degrees Celsius. This temperature gradient is significantly less than modern times, which is around 6 degrees Celsius. A study utilizing five SST proxies from 37°S to 60°S latitude confirmed that the strong temperature gradient was confined to the area immediately south of the STF, and is correlated with reduced westerly winds near New Zealand. From the 10th-14th century, the climate was similar to that of modern times during a period known as the Medieval climate optimum, or the Medieval warm period (MWP). It was found that the warming that is taking place in current years is both more frequent and more spatially homogeneous than what was experienced during the MWP. A warming of +1 degree Celsius occurs 5–40 times more frequently in modern years than during the MWP. The major forcing during the MWP was due to greater solar activity, which led to heterogeneity compared to the greenhouse gas forcing of modern years that leads to more homogeneous warming. This was followed by the Little Ice Age, from the 13th or 14th century to the mid-19th century.

The temporal and spatial extent of climate change during the Holocene is an area of considerable uncertainty, with radiative forcing recently proposed to be the origin of cycles identified in the North Atlantic region. Climate cyclicity through the Holocene (Bond events) has been observed in or near marine settings and is strongly controlled by glacial input to the North Atlantic. Periodicities of ≈2500, ≈1500, and ≈1000 years are generally observed in the North Atlantic. At the same time spectral analyses of the continental record, which is remote from oceanic influence, reveal persistent periodicities of 1,000 and 500 years that may correspond to solar activity variations during the Holocene Epoch. A 1,500-year cycle corresponding to the North Atlantic oceanic circulation may have had widespread global distribution in the Late Holocene.

Ecological developments

Animal and plant life have not evolved much during the relatively short Holocene, but there have been major shifts in the richness and abundance of plants and animals. A number of large animals including mammoths and mastodons, saber-toothed cats like Smilodon and Homotherium, and giant sloths went extinct in the late Pleistocene and early Holocene. The extinction of some megafauna in America could be attributed to the Clovis people; this culture was known for "Clovis points" which were fashioned on spears for hunting animals. Shrubs, herbs, and mosses had also changed in relative abundance from the Pleistocene to Holocene, identified by permafrost core samples.

Throughout the world, ecosystems in cooler climates that were previously regional have been isolated in higher altitude ecological "islands".

The 8.2-ka event, an abrupt cold spell recorded as a negative excursion in the δ18O record lasting 400 years, is the most prominent climatic event occurring in the Holocene Epoch, and may have marked a resurgence of ice cover. It has been suggested that this event was caused by the final drainage of Lake Agassiz, which had been confined by the glaciers, disrupting the thermohaline circulation of the Atlantic. This disruption was the result of an ice dam over Hudson Bay collapsing, sending cold Lake Agassiz water into the North Atlantic Ocean. Furthermore, studies show that the melting of Lake Agassiz led to sea-level rise which flooded the North American coastal landscape. Basal peat plants were then used to determine the resulting local sea-level rise of 0.20–0.56 m in the Mississippi Delta. Subsequent research, however, suggested that the discharge was probably superimposed upon a longer episode of cooler climate lasting up to 600 years and observed that the extent of the area affected was unclear.

Human developments

Overview map of the world at the end of the 2nd millennium BC, color-coded by cultural stage:
  hunter-gatherers (Palaeolithic or Mesolithic)
  nomadic pastoralists
  simple farming societies
  • complex farming societies (Bronze Age: Old World, Olmecs, Andes)
  state societies (Fertile Crescent, Egypt, China)

The beginning of the Holocene corresponds with the beginning of the Mesolithic age in most of Europe. In regions such as the Middle East and Anatolia, the term Epipaleolithic is preferred in place of Mesolithic, as they refer to approximately the same time period. Cultures in this period include Hamburgian, Federmesser, and the Natufian culture, during which the oldest inhabited places still existing on Earth were first settled, such as Tell es-Sultan (Jericho) in the Middle East. There is also evolving archeological evidence of proto-religion at locations such as Göbekli Tepe, as long ago as the 9th millennium BC.

The preceding period of the Late Pleistocene had already brought advancements such as the bow and arrow, creating more efficient forms of hunting and replacing spear throwers. In the Holocene, however, the domestication of plants and animals allowed humans to develop villages and towns in centralized locations. Archaeological data show that between 10,000 and 7,000 BP rapid domestication of plants and animals took place in tropical and subtropical parts of Asia, Africa, and Central America. The development of farming allowed humans to transition away from hunter-gatherer nomadic cultures, which did not establish permanent settlements, to a more sustainable sedentary lifestyle. This change allowed humans to develop towns and villages in centralized locations, which gave rise to the world known today. It is believed that the domestication of plants and animals began in the early part of the Holocene in the tropical areas of the planet, whose warm, moist climate was well suited to effective farming. Culture development and human population change, specifically in South America, have also been linked to spikes in hydroclimate resulting in climate variability in the mid-Holocene (8.2 – 4.2 k cal BP). The effects of climate change on seasonality and available moisture also allowed for favorable agricultural conditions, which promoted human development in the Maya and Tiwanaku regions.

Extinction event

The Holocene extinction, otherwise referred to as the sixth mass extinction or Anthropocene extinction, is an ongoing extinction event of species during the present Holocene epoch (with the more recent time sometimes called Anthropocene) as a result of human activity. The included extinctions span numerous families of bacteria, fungi, plants and animals, including mammals, birds, reptiles, amphibians, fish and invertebrates. With widespread degradation of highly biodiverse habitats such as coral reefs and rainforests, as well as other areas, the vast majority of these extinctions are thought to be undocumented, as the species are undiscovered at the time of their extinction, or no one has yet discovered their extinction. The current rate of extinction of species is estimated at 100 to 1,000 times higher than natural background extinction rates.

Politics of Europe

From Wikipedia, the free encyclopedia ...