
Monday, September 20, 2021

Kidney failure

From Wikipedia, the free encyclopedia
 
Kidney failure
Other names: Renal failure, end-stage renal disease (ESRD), stage 5 chronic kidney disease
A hemodialysis machine, which is used to replace the function of the kidneys
Specialty: Nephrology
Symptoms: Leg swelling, feeling tired, loss of appetite, confusion
Complications: Acute: uremia, high blood potassium, volume overload; Chronic: heart disease, high blood pressure, anemia
Types: Acute kidney failure, chronic kidney failure
Causes: Acute: low blood pressure, blockage of the urinary tract, certain medications, muscle breakdown, hemolytic uremic syndrome; Chronic: diabetes, high blood pressure, nephrotic syndrome, polycystic kidney disease
Diagnostic method: Acute: decreased urine production, increased serum creatinine; Chronic: glomerular filtration rate (GFR) < 15
Treatment: Acute: depends on the cause; Chronic: hemodialysis, peritoneal dialysis, kidney transplant
Frequency: Acute: 3 per 1,000 per year; Chronic: 1 per 1,000 (US)

Kidney failure, also known as end-stage kidney disease, is a medical condition in which the kidneys are functioning at less than 15% of normal levels. Kidney failure is classified as either acute kidney failure, which develops rapidly and may resolve, or chronic kidney failure, which develops slowly and is often irreversible. Symptoms may include leg swelling, feeling tired, vomiting, loss of appetite, and confusion. Complications of acute and chronic failure include uremia, high blood potassium, and volume overload. Complications of chronic failure also include heart disease, high blood pressure, and anemia.

Causes of acute kidney failure include low blood pressure, blockage of the urinary tract, certain medications, muscle breakdown, and hemolytic uremic syndrome. Causes of chronic kidney failure include diabetes, high blood pressure, nephrotic syndrome, and polycystic kidney disease. Diagnosis of acute failure is often based on a combination of factors such as decreased urine production or increased serum creatinine. Diagnosis of chronic failure is based on a glomerular filtration rate (GFR) of less than 15 or the need for renal replacement therapy. It is also equivalent to stage 5 chronic kidney disease.

Treatment of acute failure depends on the underlying cause. Treatment of chronic failure may include hemodialysis, peritoneal dialysis, or a kidney transplant. Hemodialysis uses a machine to filter the blood outside the body. In peritoneal dialysis, a specific fluid is placed into the abdominal cavity and then drained, with this process being repeated multiple times per day. Kidney transplantation involves surgically placing a kidney from someone else and then taking immunosuppressant medication to prevent rejection. Other recommended measures for chronic disease include staying active and specific dietary changes. Depression is also common among patients with kidney failure, and is associated with poor outcomes including higher risk of kidney function decline, hospitalization, and death. A recent PCORI-funded study of patients with kidney failure receiving outpatient hemodialysis found similar effectiveness between nonpharmacological and pharmacological treatments for depression.

In the United States, acute failure affects about 3 per 1,000 people a year. Chronic failure affects about 1 in 1,000 people, with 3 per 10,000 people newly developing the condition each year. Acute failure is often reversible, while chronic failure often is not. With appropriate treatment many with chronic disease can continue working.

Classification

Kidney failure can be divided into two categories: acute kidney failure or chronic kidney failure. The type of renal failure is differentiated by the trend in the serum creatinine; other factors that may help differentiate acute kidney failure from chronic kidney failure include anemia and the kidney size on sonography as chronic kidney disease generally leads to anemia and small kidney size.

Acute kidney failure

Acute kidney injury (AKI), previously called acute renal failure (ARF), is a rapidly progressive loss of renal function, generally characterized by oliguria (decreased urine production, quantified as less than 400 mL per day in adults, less than 0.5 mL/kg/h in children, or less than 1 mL/kg/h in infants), together with fluid and electrolyte imbalance. AKI can result from a variety of causes, generally classified as prerenal, intrinsic, and postrenal. Many people diagnosed with paraquat intoxication experience AKI, sometimes requiring hemodialysis. The underlying cause must be identified and treated to arrest the progress, and dialysis may be necessary to bridge the time gap required for treating these fundamental causes.
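
The urine-output cutoffs above lend themselves to a simple check. The sketch below (Python, with a hypothetical helper name; purely illustrative, not a clinical tool) simply restates those thresholds:

def is_oliguric(group, urine_ml_per_day=None, urine_ml_per_kg_per_h=None):
    """Check urine output against the oliguria cutoffs quoted above.

    Adults: < 400 mL/day; children: < 0.5 mL/kg/h; infants: < 1 mL/kg/h.
    Illustrative only - not a substitute for clinical assessment.
    """
    if group == "adult":
        return urine_ml_per_day < 400
    if group == "child":
        return urine_ml_per_kg_per_h < 0.5
    if group == "infant":
        return urine_ml_per_kg_per_h < 1.0
    raise ValueError("group must be 'adult', 'child' or 'infant'")

print(is_oliguric("adult", urine_ml_per_day=350))  # -> True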

Chronic kidney failure

Illustration of a kidney from a person with chronic renal failure

Chronic kidney disease (CKD) can also develop slowly and, initially, show few symptoms. CKD can be the long term consequence of irreversible acute disease or part of a disease progression.

Acute-on-chronic kidney failure

Acute kidney injuries can be present on top of chronic kidney disease, a condition called acute-on-chronic kidney failure (AoCRF). The acute part of AoCRF may be reversible, and the goal of treatment, as with AKI, is to return the person to baseline kidney function, typically measured by serum creatinine. Like AKI, AoCRF can be difficult to distinguish from chronic kidney disease if the person has not been monitored by a physician and no baseline (i.e., past) blood work is available for comparison.

Signs and symptoms

Symptoms can vary from person to person. Someone in early-stage kidney disease may not feel sick or notice symptoms as they occur. When the kidneys fail to filter properly, waste accumulates in the blood and the body, a condition called azotemia. Very low levels of azotemia may produce few, if any, symptoms. If the disease progresses to a sufficient degree, symptoms become noticeable. Kidney failure accompanied by noticeable symptoms is termed uremia.

Symptoms of kidney failure include the following:

  • High levels of urea in the blood, which can result in:
    • Vomiting or diarrhea (or both), which may lead to dehydration
    • Nausea
    • Weight loss
    • Nocturnal urination (nocturia)
    • More frequent urination, or in greater amounts than usual, with pale urine
    • Less frequent urination, or in smaller amounts than usual, with dark coloured urine
    • Blood in the urine
    • Pressure or difficulty when urinating
    • Unusual amounts of urination, usually in large quantities
  • A buildup of phosphates in the blood that diseased kidneys cannot filter out may cause:
    • Itching
    • Bone damage
    • Muscle cramps (caused by low levels of calcium, which can be associated with high phosphate levels)
  • A buildup of potassium in the blood that diseased kidneys cannot filter out (called hyperkalemia) may cause:
    • Abnormal heart rhythms
    • Muscle paralysis
  • Failure of kidneys to remove excess fluid may cause:
    • Swelling of the hands, legs, ankles, feet, or face
    • Shortness of breath due to extra fluid on the lungs (may also be caused by anemia)
  • Polycystic kidney disease, which causes large, fluid-filled cysts on the kidneys and sometimes the liver, can cause:
    • Pain in the back or side
  • Healthy kidneys produce the hormone erythropoietin that stimulates the bone marrow to make oxygen-carrying red blood cells. As the kidneys fail, they produce less erythropoietin, resulting in decreased production of red blood cells to replace the natural breakdown of old red blood cells. As a result, the blood carries less hemoglobin, a condition known as anemia. This can result in:
    • Feeling tired or weak
    • Memory problems
    • Difficulty concentrating
    • Dizziness
    • Low blood pressure
  • Normally, proteins are too large to pass through the kidneys' filters. However, they are able to pass through when the glomeruli are damaged. This does not cause symptoms until extensive kidney damage has occurred, after which symptoms include:
    • Foamy or bubbly urine
    • Swelling in the hands, feet, abdomen, and face
  • Other symptoms include:
    • Appetite loss, which may include a bad taste in the mouth
    • Difficulty sleeping
    • Darkening of the skin
    • Excess protein in the blood
    • With high doses of penicillin, people with kidney failure may experience seizures

Causes

Acute kidney injury

Acute kidney injury (previously known as acute renal failure) – or AKI – usually occurs when the blood supply to the kidneys is suddenly interrupted or when the kidneys become overloaded with toxins. Causes of acute kidney injury include accidents, injuries, or complications from surgeries in which the kidneys are deprived of normal blood flow for extended periods of time. Heart-bypass surgery is an example of one such procedure.

Drug overdoses, whether accidental or from chemical overloads of drugs such as antibiotics or chemotherapy agents, along with bee stings, may also cause the onset of acute kidney injury. Unlike chronic kidney disease, however, the kidneys can often recover from acute kidney injury, allowing the person with AKI to resume a normal life. People suffering from acute kidney injury require supportive treatment until their kidneys recover function, and they often remain at increased risk of developing future kidney failure.

Among the accidental causes of renal failure is crush syndrome, in which large amounts of toxins are suddenly released into the circulation after a limb that has been compressed for a prolonged period is suddenly freed from the pressure obstructing blood flow through its tissues, causing ischemia. The resulting overload can lead to the clogging and destruction of the kidneys. It is a reperfusion injury that appears after the release of the crushing pressure. The mechanism is believed to be the release into the bloodstream of muscle breakdown products – notably myoglobin, potassium, and phosphorus – the products of rhabdomyolysis (the breakdown of skeletal muscle damaged by ischemic conditions). The specific action on the kidneys is not fully understood, but may be due in part to nephrotoxic metabolites of myoglobin.

Chronic kidney failure

Chronic kidney failure has numerous causes. The most common causes of chronic failure are diabetes mellitus and long-term, uncontrolled hypertension. Polycystic kidney disease is another well-known cause of chronic failure. The majority of people afflicted with polycystic kidney disease have a family history of the disease. Other genetic illnesses cause kidney failure, as well.

Overuse of common drugs such as ibuprofen and acetaminophen (paracetamol) can also cause chronic kidney failure.

Some infectious disease agents, such as hantavirus, can attack the kidneys, causing kidney failure.

Genetic predisposition

The APOL1 gene has been proposed as a major genetic risk locus for a spectrum of nondiabetic renal failure in individuals of African origin; these include HIV-associated nephropathy (HIVAN), primary nonmonogenic forms of focal segmental glomerulosclerosis, and hypertension-attributed chronic kidney disease not ascribed to other etiologies. Two West African variants in APOL1 have been shown to be associated with end-stage kidney disease in African Americans and Hispanic Americans.

Diagnostic approach

Measurement for CKD

Stages of kidney failure

Chronic kidney failure is measured in five stages, which are calculated using the person's GFR, or glomerular filtration rate. Stage 1 CKD is mildly diminished renal function, with few overt symptoms. People in stages 2 and 3 need increasing levels of supportive care from their medical providers to slow and treat the renal dysfunction. People with stage 4 and 5 kidney failure usually require preparation for active treatment in order to survive. Stage 5 CKD is considered a severe illness and requires some form of renal replacement therapy (dialysis) or a kidney transplant whenever feasible.
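
As a rough illustration of the staging just described, the sketch below maps a GFR value to a CKD stage. Only the stage 5 cutoff (GFR < 15) is stated in this article; the other boundaries (90, 60, 30) are the commonly used KDOQI cutoffs and should be treated as an assumption here.

def ckd_stage(gfr):
    """Map an estimated GFR (mL/min/1.73 m^2) to a CKD stage (1-5).

    The stage 5 cutoff (GFR < 15) is stated in the text; the 90/60/30
    boundaries are the widely used KDOQI cutoffs (assumed here).  Stage 1
    additionally requires other evidence of kidney damage, not checked here.
    """
    if gfr >= 90:
        return 1
    if gfr >= 60:
        return 2
    if gfr >= 30:
        return 3
    if gfr >= 15:
        return 4
    return 5  # stage 5 CKD / kidney failure

print(ckd_stage(12))  # -> 5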

Glomerular filtration rate

A normal GFR varies according to many factors, including sex, age, body size and ethnic background. Renal professionals consider the glomerular filtration rate (GFR) to be the best overall index of kidney function. The National Kidney Foundation offers an easy-to-use online GFR calculator for anyone who is interested in knowing their glomerular filtration rate. (A serum creatinine level, a simple blood test, is needed to use the calculator.)
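
For readers curious about what such a calculator does internally, the sketch below uses one widely published estimating equation (the four-variable MDRD formula). The National Kidney Foundation calculator may use a different equation (for example CKD-EPI), so this is an illustration rather than a reproduction of that tool.

def egfr_mdrd(serum_creatinine_mg_dl, age_years, female=False, black=False):
    """Estimate GFR (mL/min/1.73 m^2) with the 4-variable MDRD equation.

    Illustrative only; not for clinical use.
    """
    egfr = 175.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(round(egfr_mdrd(1.0, 50, female=True)))  # -> about 59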

Use of the term uremia

Before the advancement of modern medicine, renal failure was often referred to as uremic poisoning. Uremia was the term for the contamination of the blood with urea: the presence of an excessive amount of urea in the blood. Starting around 1847, the term also covered reduced urine output, which was thought to be caused by urine mixing with the blood instead of being voided through the urethra. The term uremia is now used for the illness accompanying kidney failure.

Treatment

The treatment of acute kidney injury depends on the cause. The treatment of chronic kidney failure may include renal replacement therapy: hemodialysis, peritoneal dialysis, or kidney transplant.

Diet

In non-diabetics and people with type 1 diabetes, a low-protein diet has been found to have a preventive effect on progression of chronic kidney disease. However, this effect does not apply to people with type 2 diabetes. A whole-food, plant-based diet may help some people with kidney disease. A high-protein diet from either animal or plant sources appears to have negative effects on kidney function, at least in the short term.

Slowing progression

People who receive earlier referrals to a nephrology specialist, meaning a longer time before they must start dialysis, have a shorter initial hospitalization and reduced risk of death after the start of dialysis. Other methods of reducing disease progression include minimizing exposure to nephrotoxins such as NSAIDs and intravenous contrast.

Hemodialysis

From Wikipedia, the free encyclopedia
 
Hemodialysis
Hemodialysis machine
Other names: Kidney dialysis
Specialty: Nephrology

Hemodialysis, also spelled haemodialysis, or simply dialysis, is a process of purifying the blood of a person whose kidneys are not working normally. This type of dialysis achieves the extracorporeal removal of waste products such as creatinine and urea, as well as free water, from the blood when the kidneys are in a state of kidney failure. Hemodialysis is one of three renal replacement therapies (the other two being kidney transplant and peritoneal dialysis). An alternative method for extracorporeal separation of blood components such as plasma or cells is apheresis.

Hemodialysis can be an outpatient or inpatient therapy. Routine hemodialysis is conducted in a dialysis outpatient facility, either a purpose built room in a hospital or a dedicated, stand-alone clinic. Less frequently hemodialysis is done at home. Dialysis treatments in a clinic are initiated and managed by specialized staff made up of nurses and technicians; dialysis treatments at home can be self-initiated and managed or done jointly with the assistance of a trained helper who is usually a family member.

Medical uses

Hemodialysis in progress

Hemodialysis is the renal replacement therapy of choice for patients who need dialysis acutely, and for many patients as maintenance therapy. It provides excellent, rapid clearance of solutes.

A nephrologist (a medical kidney specialist) decides when hemodialysis is needed and the various parameters for a dialysis treatment. These include frequency (how many treatments per week), length of each treatment, and the blood and dialysis solution flow rates, as well as the size of the dialyzer. The composition of the dialysis solution is also sometimes adjusted in terms of its sodium, potassium, and bicarbonate levels. In general, the larger the body size of an individual, the more dialysis he/she will need. In North America and the UK, 3–4 hour treatments (sometimes up to 5 hours for larger patients) given 3 times a week are typical. Twice-a-week sessions are limited to patients who have a substantial residual kidney function. Four sessions per week are often prescribed for larger patients, as well as patients who have trouble with fluid overload. Finally, there is growing interest in short daily home hemodialysis, which is 1.5 – 4 hr sessions given 5–7 times per week, usually at home. There is also interest in nocturnal dialysis, which involves dialyzing a patient, usually at home, for 8–10 hours per night, 3–6 nights per week. Nocturnal in-center dialysis, 3–4 times per week, is also offered at a handful of dialysis units in the United States.
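
To make the schedules above concrete, the short sketch below tabulates total weekly treatment hours for the regimens mentioned. The specific session lengths chosen are illustrative points within the ranges quoted above, not prescriptions.

# Approximate weekly hours for the schedules described above (illustrative).
schedules = {
    "conventional (3 x 4 h)":       3 * 4,
    "short daily home (6 x 2.5 h)": 6 * 2.5,
    "nocturnal home (5 x 8 h)":     5 * 8,
}
for name, hours in schedules.items():
    print(f"{name}: {hours} h/week")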

Adverse effects

Disadvantages

  • Restricts independence, as people undergoing this procedure cannot travel freely because supplies must be available
  • Requires more supplies, such as high-quality water and electricity
  • Requires reliable technology such as dialysis machines
  • The procedure is complicated and requires that caregivers have more knowledge
  • Requires time to set up and clean dialysis machines, and expense for the machines and associated staff

Complications

Fluid shifts

Hemodialysis often involves fluid removal (through ultrafiltration), because most patients with renal failure pass little or no urine. Side effects caused by removing too much fluid and/or removing fluid too rapidly include low blood pressure, fatigue, chest pains, leg cramps, nausea and headaches. These symptoms can occur during the treatment and can persist post-treatment; they are sometimes collectively referred to as the dialysis hangover or dialysis washout. The severity of these symptoms is usually proportionate to the amount and speed of fluid removal. However, the impact of a given amount or rate of fluid removal can vary greatly from person to person and day to day. These side effects can be avoided, and/or their severity lessened, by limiting fluid intake between treatments or increasing the dose of dialysis (e.g., dialyzing more often or longer per treatment than the standard schedule of three times a week, 3–4 hours per treatment).
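
Because symptom severity tracks both the amount and the speed of fluid removal, the removal rate is often expressed per kilogram of body weight per hour. A minimal sketch follows; the example numbers (3 L removed, 4-hour session, 75 kg patient) are assumptions for illustration, not values from the text.

def uf_rate(fluid_removed_ml, treatment_hours, weight_kg):
    """Ultrafiltration rate in mL per kg of body weight per hour."""
    return fluid_removed_ml / (treatment_hours * weight_kg)

# e.g. removing 3 L over a 4-hour session in a 75 kg patient:
print(uf_rate(3000, 4, 75))  # -> 10.0 mL/kg/h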

Access-related

Since hemodialysis requires access to the circulatory system, patients undergoing hemodialysis may expose their circulatory system to microbes, which can lead to bacteremia, an infection affecting the heart valves (endocarditis), or an infection affecting the bones (osteomyelitis). The risk of infection varies depending on the type of access used (see below). Bleeding may also occur; again, the risk varies depending on the type of access used. Infections can be minimized by strictly adhering to infection control best practices.

Venous needle dislodgement

Venous needle dislodgement (VND) is a potentially fatal complication of hemodialysis in which the patient suffers rapid blood loss due to an insecure attachment of the needle to the venous access point.

Anticoagulation-related

Unfractionated heparin (UFH) is the most commonly used anticoagulant in hemodialysis, as it is generally well tolerated and can be quickly reversed with protamine sulfate. Low-molecular-weight heparin (LMWH) is, however, becoming increasingly popular and is now the norm in western Europe. Compared to UFH, LMWH has the advantage of an easier mode of administration and reduced bleeding, but the effect cannot be easily reversed. Heparin can infrequently cause a low platelet count due to a reaction called heparin-induced thrombocytopenia (HIT). In such patients, alternative anticoagulants may be used. The risk of HIT is lower with LMWH compared to UFH. Even though HIT causes a low platelet count, it can paradoxically predispose to thrombosis. In patients at high risk of bleeding, dialysis can be done without anticoagulation.

First-use syndrome

First-use syndrome is a rare but severe anaphylactic reaction to the artificial kidney. Its symptoms include sneezing, wheezing, shortness of breath, back pain, chest pain, or sudden death. It can be caused by residual sterilant in the artificial kidney or the material of the membrane itself. In recent years, the incidence of first-use syndrome has decreased, due to an increased use of gamma irradiation, steam sterilization, or electron-beam radiation instead of chemical sterilants, and the development of new semipermeable membranes of higher biocompatibility. New methods of processing previously acceptable components of dialysis must always be considered. For example, in 2008, a series of first-use-type reactions, including deaths, occurred due to heparin that had been contaminated during the manufacturing process with oversulfated chondroitin sulfate.

Cardiovascular

Long-term complications of hemodialysis include hemodialysis-associated amyloidosis, neuropathy and various forms of heart disease. Increasing the frequency and length of treatments has been shown to improve the fluid overload and enlargement of the heart that are commonly seen in such patients. Due to these complications, the prevalence of complementary and alternative medicine use is high among patients undergoing hemodialysis.

Vitamin deficiency

Folate deficiency can occur in some patients having hemodialysis.

Electrolyte imbalances

Although a dialysate fluid, which is a solution containing diluted electrolytes, is employed for the filtration of blood, hemodialysis can cause an electrolyte imbalance. These imbalances can derive from abnormal concentrations of potassium (hypokalemia, hyperkalemia) and sodium (hyponatremia, hypernatremia). These electrolyte imbalances are associated with increased cardiovascular mortality.

Mechanism and technique

Semipermeable membrane

The principle of hemodialysis is the same as other methods of dialysis; it involves diffusion of solutes across a semipermeable membrane. Hemodialysis utilizes counter-current flow, where the dialysate flows in the opposite direction to blood flow in the extracorporeal circuit. Counter-current flow maintains the concentration gradient across the membrane at a maximum and increases the efficiency of the dialysis.

Fluid removal (ultrafiltration) is achieved by altering the hydrostatic pressure of the dialysate compartment, causing free water and some dissolved solutes to move across the membrane along a created pressure gradient.
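
The achieved fluid removal is commonly described as the product of the transmembrane pressure and the dialyzer's ultrafiltration coefficient (Kuf). The sketch below shows that relation; the example Kuf and pressure values are assumptions for illustration, and modern machines in practice control ultrafiltration volumetrically rather than by setting a pressure directly.

def ultrafiltration_ml_per_h(kuf_ml_per_h_per_mmhg, tmp_mmhg):
    """Ultrafiltration rate = Kuf x transmembrane pressure (TMP)."""
    return kuf_ml_per_h_per_mmhg * tmp_mmhg

# e.g. a dialyzer with an assumed Kuf of 40 mL/h/mmHg at a TMP of 25 mmHg:
print(ultrafiltration_ml_per_h(40, 25))  # -> 1000 mL removed per hour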

The dialysis solution that is used may be a sterilized solution of mineral ions and is called dialysate. Urea and other waste products, including potassium and phosphate, diffuse into the dialysis solution. However, concentrations of sodium and chloride are similar to those of normal plasma to prevent loss. Sodium bicarbonate is added in a higher concentration than plasma to correct blood acidity. A small amount of glucose is also commonly used. The concentration of electrolytes in the dialysate is adjusted depending on the patient's status before the dialysis. If a high concentration of sodium is added to the dialysate, the patient can become thirsty and end up accumulating body fluids, which can lead to heart damage. Conversely, low concentrations of sodium in the dialysate solution have been associated with lower blood pressure and reduced interdialytic weight gain, which are markers of improved outcomes. However, the benefits of using a low concentration of sodium have not yet been demonstrated, since these patients can also suffer from cramps, intradialytic hypotension and low sodium in serum, which are symptoms associated with a high mortality risk.

Note that this is a different process from the related technique of hemofiltration.

Access

Three primary methods are used to gain access to the blood for hemodialysis: an intravenous catheter, an arteriovenous (AV) fistula, and a synthetic graft. The type of access is influenced by factors such as the expected time course of a patient's renal failure and the condition of their vasculature. Patients may have multiple access procedures, usually because an AV fistula or graft is maturing and a catheter is still being used. The placement of a catheter is usually done under light sedation, while fistulas and grafts require an operation.

Permacath for dialysis

Types

There are three types of hemodialysis: conventional hemodialysis, daily hemodialysis, and nocturnal hemodialysis. Below is an adaptation and summary from a brochure of The Ottawa Hospital.

Conventional hemodialysis

Conventional hemodialysis is usually done three times per week, for about three to four hours for each treatment (sometimes five hours for larger patients), during which the patient's blood is drawn out through a tube at a rate of 200–400 mL/min. The tube is connected to a 15-, 16-, or 17-gauge needle inserted in the dialysis fistula or graft, or connected to one port of a dialysis catheter. The blood is then pumped through the dialyzer, and the processed blood is pumped back into the patient's bloodstream through another tube (connected to a second needle or port). During the procedure, the patient's blood pressure is closely monitored, and if it becomes low, or the patient develops any other signs of low blood volume such as nausea, the dialysis attendant can administer extra fluid through the machine. During the treatment, the patient's entire blood volume (about 5,000 mL) circulates through the machine roughly every 15 minutes. Over the course of a treatment, the dialysis patient is thus exposed to roughly the amount of water an average person takes in over a week.
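
The recirculation figure quoted above follows directly from the blood-pump rates: dividing the approximate 5,000 mL blood volume by the pump speed gives the time for one full pass, as this short sketch shows.

blood_volume_ml = 5000                    # approximate figure used above
for pump_ml_per_min in (200, 300, 400):   # the blood flow range quoted above
    minutes = blood_volume_ml / pump_ml_per_min
    print(f"{pump_ml_per_min} mL/min -> whole blood volume every {minutes:.1f} min")
# -> 25.0, 16.7 and 12.5 minutes respectively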

Daily hemodialysis

Daily hemodialysis is typically used by those patients who do their own dialysis at home. It is less stressful (more gentle) but does require more frequent access. This is simple with catheters, but more problematic with fistulas or grafts. The "buttonhole technique" can be used for fistulas requiring frequent access. Daily hemodialysis is usually done for 2 hours six days a week.

Nocturnal hemodialysis

The procedure of nocturnal hemodialysis is similar to conventional hemodialysis except it is performed three to six nights a week and between six and ten hours per session while the patient sleeps.

Equipment

Schematic of a hemodialysis circuit

The hemodialysis machine pumps the patient's blood and the dialysate through the dialyzer. The newest dialysis machines on the market are highly computerized and continuously monitor an array of safety-critical parameters, including blood and dialysate flow rates; dialysis solution conductivity, temperature, and pH; and analysis of the dialysate for evidence of blood leakage or presence of air. Any reading that is out of normal range triggers an audible alarm to alert the patient-care technician who is monitoring the patient. Manufacturers of dialysis machines include companies such as Nipro, Fresenius, Gambro, Baxter, B. Braun, NxStage and Bellco.
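
The alarm logic described above can be pictured as a simple range check per monitored parameter. The limits below are invented placeholders purely for illustration; real machines use manufacturer- and prescription-specific limits.

# Hypothetical alarm limits (illustrative only).
LIMITS = {
    "dialysate_temperature_c": (35.0, 39.0),
    "dialysate_conductivity_ms_per_cm": (13.0, 15.0),
    "blood_flow_ml_per_min": (150, 450),
}

def check(parameter, value):
    low, high = LIMITS[parameter]
    if not low <= value <= high:
        print(f"ALARM: {parameter} = {value} outside {low}-{high}")

check("dialysate_temperature_c", 40.2)   # triggers the alarm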

Water system

A hemodialysis unit's dialysate solution tanks

An extensive water purification system is absolutely critical for hemodialysis. Since dialysis patients are exposed to vast quantities of water, which is mixed with dialysate concentrate to form the dialysate, even trace mineral contaminants or bacterial endotoxins can filter into the patient's blood. Because the damaged kidneys cannot perform their intended function of removing impurities, ions introduced into the bloodstream via water can build up to hazardous levels, causing numerous symptoms or death. Aluminum, chloramine, fluoride, copper, and zinc, as well as bacterial fragments and endotoxins, have all caused problems in this regard.

For this reason, water used in hemodialysis is carefully purified before use. Initially, it is filtered and temperature-adjusted, and its pH is corrected by adding an acid or base. Chemical buffers such as bicarbonate and lactate can alternatively be added to regulate the pH of the dialysate. Both buffers can stabilize the pH of the solution at a physiological level without negative impacts on the patient. There is some evidence of a reduction in the incidence of heart and blood problems and high blood pressure events when using bicarbonate as the pH buffer compared to lactate. However, the mortality rates after using both buffers do not show a significant difference.

The water is then softened. Next, the water is run through a tank containing activated charcoal to adsorb organic contaminants. Primary purification is then done by forcing water through a membrane with very tiny pores, a so-called reverse osmosis membrane. This lets the water pass, but holds back even very small solutes such as electrolytes. Final removal of leftover electrolytes is done by passing the water through a tank with ion-exchange resins, which remove any leftover anions or cations and replace them with hydroxyl and hydrogen ions, respectively, leaving ultrapure water.

Even this degree of water purification may be insufficient. The trend lately is to pass this final purified water (after mixing with dialysate concentrate) through a dialyzer membrane. This provides another layer of protection by removing impurities, especially those of bacterial origin, that may have accumulated in the water after its passage through the original water purification system.

Once purified water is mixed with dialysate concentrate, its conductivity increases, since water that contains charged ions conducts electricity. During dialysis, the conductivity of dialysis solution is continuously monitored to ensure that the water and dialysate concentrate are being mixed in the proper proportions. Both excessively concentrated dialysis solution and excessively dilute solution can cause severe clinical problems.

Dialyzer

The dialyzer is the piece of equipment that actually filters the blood. Almost all dialyzers in use today are of the hollow-fiber variety. A cylindrical bundle of hollow fibers, whose walls are composed of semi-permeable membrane, is anchored at each end into potting compound (a sort of glue). This assembly is then put into a clear plastic cylindrical shell with four openings. One opening or blood port at each end of the cylinder communicates with each end of the bundle of hollow fibers. This forms the "blood compartment" of the dialyzer. Two other ports are cut into the side of the cylinder. These communicate with the space around the hollow fibers, the "dialysate compartment." Blood is pumped via the blood ports through this bundle of very thin capillary-like tubes, and the dialysate is pumped through the space surrounding the fibers. Pressure gradients are applied when necessary to move fluid from the blood to the dialysate compartment.

Membrane and flux

Dialyzer membranes come with different pore sizes. Those with smaller pore size are called "low-flux" and those with larger pore sizes are called "high-flux." Some larger molecules, such as beta-2-microglobulin, are not removed at all with low-flux dialyzers; lately, the trend has been to use high-flux dialyzers. However, such dialyzers require newer dialysis machines and high-quality dialysis solution to control the rate of fluid removal properly and to prevent backflow of dialysis solution impurities into the patient through the membrane.

Dialyzer membranes used to be made primarily of cellulose (derived from cotton linter). The surface of such membranes was not very biocompatible, because exposed hydroxyl groups would activate complement in the blood passing by the membrane. Therefore, the basic, "unsubstituted" cellulose membrane was modified. One change was to cover these hydroxyl groups with acetate groups (cellulose acetate); another was to mix in some compounds that would inhibit complement activation at the membrane surface (modified cellulose). The original "unsubstituted cellulose" membranes are no longer in wide use, whereas cellulose acetate and modified cellulose dialyzers are still used. Cellulosic membranes can be made in either low-flux or high-flux configuration, depending on their pore size.

Another group of membranes is made from synthetic materials, using polymers such as polyarylethersulfone, polyamide, polyvinylpyrrolidone, polycarbonate, and polyacrylonitrile. These synthetic membranes activate complement to a lesser degree than unsubstituted cellulose membranes. However, they are in general more hydrophobic, which leads to increased adsorption of proteins to the membrane surface, which in turn can lead to complement system activation. Synthetic membranes can be made in either low- or high-flux configuration, but most are high-flux.

Nanotechnology is being used in some of the most recent high-flux membranes to create a uniform pore size. The goal of high-flux membranes is to pass relatively large molecules such as beta-2-microglobulin (MW 11,600 daltons), but not to pass albumin (MW ~66,400 daltons). Every membrane has pores in a range of sizes. As pore size increases, some high-flux dialyzers begin to let albumin pass out of the blood into the dialysate. This is thought to be undesirable, although one school of thought holds that removing some albumin may be beneficial in terms of removing protein-bound uremic toxins.

Membrane flux and outcome

Whether using a high-flux dialyzer improves patient outcomes is somewhat controversial, but several important studies have suggested that it has clinical benefits. The NIH-funded HEMO trial compared survival and hospitalizations in patients randomized to dialysis with either low-flux or high-flux membranes. Although the primary outcome (all-cause mortality) did not reach statistical significance in the group randomized to use high-flux membranes, several secondary outcomes were better in the high-flux group. A recent Cochrane analysis concluded that benefit of membrane choice on outcomes has not yet been demonstrated. A collaborative randomized trial from Europe, the MPO (Membrane Permeabilities Outcomes) study, comparing mortality in patients just starting dialysis using either high-flux or low-flux membranes, found a nonsignificant trend to improved survival in those using high-flux membranes, and a survival benefit in patients with lower serum albumin levels or in diabetics.

Membrane flux and beta-2-microglobulin amyloidosis

High-flux dialysis membranes and/or intermittent on-line hemodiafiltration (IHDF) may also be beneficial in reducing complications of beta-2-microglobulin accumulation. Because beta-2-microglobulin is a large molecule, with a molecular weight of about 11,600 daltons, it does not pass at all through low-flux dialysis membranes. Beta-2-M is removed with high-flux dialysis, but is removed even more efficiently with IHDF. After several years (usually at least 5–7), patients on hemodialysis begin to develop complications from beta-2-M accumulation, including carpal tunnel syndrome, bone cysts, and deposits of this amyloid in joints and other tissues. Beta-2-M amyloidosis can cause very serious complications, including spondyloarthropathy, and often is associated with shoulder joint problems. Observational studies from Europe and Japan have suggested that using high-flux membranes in dialysis mode, or IHDF, reduces beta-2-M complications in comparison to regular dialysis using a low-flux membrane.

Dialyzer size and efficiency

Dialyzers come in many different sizes. A larger dialyzer with a larger membrane area (A) will usually remove more solutes than a smaller dialyzer, especially at high blood flow rates. This also depends on the membrane permeability coefficient K0 for the solute in question. So dialyzer efficiency is usually expressed as the K0A – the product of permeability coefficient and area. Most dialyzers have membrane surface areas of 0.8 to 2.2 square meters, and values of K0A ranging from about 500 to 1500 mL/min. K0A, expressed in mL/min, can be thought of as the maximum clearance of a dialyzer at very high blood and dialysate flow rates.
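
The text defines K0A as the clearance limit at very high flow rates; at realistic flows, clearance is lower. A common textbook expression for a counter-current dialyzer (often attributed to Michaels) is sketched below. Treat it, and the example flow values, as an illustration of how K0A, blood flow and dialysate flow interact rather than as the article's own formula.

import math

def dialyzer_clearance(k0a, qb, qd):
    """Estimated counter-current clearance (mL/min) for blood flow qb and
    dialysate flow qd (both mL/min), given the dialyzer's K0A.  Assumes qb != qd."""
    n = k0a / qb
    z = qb / qd
    e = math.exp(n * (1.0 - z))
    return qb * (e - 1.0) / (e - z)

# e.g. K0A = 800 mL/min, blood flow 300 mL/min, dialysate flow 500 mL/min:
print(round(dialyzer_clearance(800, 300, 500)))  # ~248, below the 300 mL/min blood flow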

Reuse of dialyzers

The dialyzer may either be discarded after each treatment or be reused. Reuse requires an extensive procedure of high-level disinfection. Reused dialyzers are not shared between patients. There was an initial controversy about whether reusing dialyzers worsened patient outcomes. The consensus today is that reuse of dialyzers, if done carefully and properly, produces similar outcomes to single use of dialyzers.

Dialyzer reuse is a practice that has been around since the invention of the product. The practice involves cleaning a used dialyzer so that it can be reused multiple times for the same patient. Dialysis clinics reuse dialyzers to be more economical and to reduce the high costs of "single-use" dialysis, which can be extremely expensive and wasteful. Single-use dialyzers are used just once and then thrown out, creating a large amount of biomedical waste with no opportunity for cost savings. If done properly, dialyzer reuse can be very safe for dialysis patients.

There are two ways of reusing dialyzers, manual and automated. Manual reuse involves cleaning a dialyzer by hand. The dialyzer is semi-disassembled, then flushed repeatedly before being rinsed with water. It is then stored with a liquid disinfectant (peracetic acid, PAA) for 18 or more hours until its next use. Although many clinics outside the USA use this method, some clinics are switching toward a more automated, streamlined process as the dialysis practice advances. The newer method of automated reuse relies on a type of medical device introduced in the early 1980s. These devices are beneficial to dialysis clinics that practice reuse – especially for large dialysis clinical entities – because they allow for several back-to-back cycles per day. The dialyzer is first pre-cleaned by a technician, then automatically cleaned by the machine through a step-cycle process until it is eventually filled with liquid disinfectant for storage.

Although automated reuse is more effective than manual reuse, newer technology has sparked even more advancement in the process of reuse. When reused over 15 times with current methodology, a dialyzer can lose beta-2-microglobulin clearance, middle-molecule clearance, and fiber-pore structural integrity, which has the potential to reduce the effectiveness of the patient's dialysis session. As of 2010, newer, more advanced reprocessing technology had demonstrated the ability to eliminate the manual pre-cleaning step altogether, as well as the potential to regenerate (fully restore) all functions of a dialyzer to levels approximately equivalent to single use for more than 40 cycles. As medical reimbursement rates continue to fall, many dialysis clinics continue to operate effectively with reuse programs, especially since the process is easier and more streamlined than before.

Epidemiology

Hemodialysis was one of the most common procedures performed in U.S. hospitals in 2011, occurring in 909,000 stays (a rate of 29 stays per 10,000 population). This was an increase of 68 percent from 1997, when there were 473,000 stays. It was the fifth most common procedure for patients aged 45–64 years.

History

Many have played a role in developing dialysis as a practical treatment for renal failure, starting with Thomas Graham of Glasgow, who first presented the principles of solute transport across a semipermeable membrane in 1854. The artificial kidney was first developed by Abel, Rowntree, and Turner in 1913; the first hemodialysis in a human being was performed by Haas (February 28, 1924); and the artificial kidney was developed into a clinically useful apparatus by Kolff in 1943–1945. This research showed that life could be prolonged in patients dying of kidney failure.

Willem Kolff was the first to construct a working dialyzer in 1943. The first successfully treated patient was a 67-year-old woman in uremic coma who regained consciousness after 11 hours of hemodialysis with Kolff's dialyzer in 1945. At the time of its creation, Kolff's goal was to provide life support during recovery from acute renal failure. After World War II ended, Kolff donated the five dialyzers he had made to hospitals around the world, including Mount Sinai Hospital, New York. Kolff gave a set of blueprints for his hemodialysis machine to George Thorn at the Peter Bent Brigham Hospital in Boston. This led to the manufacture of the next generation of Kolff's dialyzer, a stainless steel Kolff-Brigham dialysis machine.

According to McKellar (1999), a significant contribution to renal therapies was made by Canadian surgeon Gordon Murray with the assistance of two doctors, an undergraduate chemistry student, and research staff. Murray's work was conducted simultaneously and independently from that of Kolff. Murray's work led to the first successful artificial kidney built in North America in 1945–46, which was successfully used to treat a 26-year-old woman out of a uremic coma in Toronto. The less crude, more compact, second-generation "Murray-Roschlau" dialyser was invented in 1952–53; its designs were stolen by German immigrant Erwin Halstrup and passed off as his own (the "Halstrup–Baumann artificial kidney").

By the 1950s, Willem Kolff's invention of the dialyzer was used for acute renal failure, but it was not seen as a viable treatment for patients with stage 5 chronic kidney disease (CKD). At the time, doctors believed it was impossible for patients to have dialysis indefinitely for two reasons. First, they thought no man-made device could replace the function of kidneys over the long term. In addition, a patient undergoing dialysis suffered from damaged veins and arteries, so that after several treatments, it became difficult to find a vessel to access the patient's blood.

The original Kolff kidney was not very useful clinically, because it did not allow for removal of excess fluid. Swedish professor Nils Alwall encased a modified version of this kidney inside a stainless steel canister, to which a negative pressure could be applied, in this way effecting the first truly practical application of hemodialysis, which was done in 1946 at the University of Lund. Alwall was also arguably the inventor of the arteriovenous shunt for dialysis. He first reported this in 1948, when he used such an arteriovenous shunt in rabbits. Subsequently, he used such shunts, made of glass, as well as his canister-enclosed dialyzer, to treat 1,500 patients in renal failure between 1946 and 1960, as reported to the First International Congress of Nephrology held in Evian in September 1960. Alwall was appointed to a newly created Chair of Nephrology at the University of Lund in 1957. Subsequently, he collaborated with Swedish businessman Holger Crafoord to found one of the key companies that would manufacture dialysis equipment over the following 50 years, Gambro. The early history of dialysis has been reviewed by Stanley Shaldon.

Belding H. Scribner, working with the biomechanical engineer Wayne Quinton, modified the glass shunts used by Alwall by making them from Teflon. Another key improvement was to connect them to a short piece of silicone elastomer tubing. This formed the basis of the so-called Scribner shunt, perhaps more properly called the Quinton-Scribner shunt. After treatment, the circulatory access would be kept open by connecting the two tubes outside the body using a small U-shaped Teflon tube, which would shunt the blood from the tube in the artery back to the tube in the vein.

In 1962, Scribner started the world's first outpatient dialysis facility, the Seattle Artificial Kidney Center, later renamed the Northwest Kidney Centers. Immediately the problem arose of who should be given dialysis, since demand far exceeded the capacity of the six dialysis machines at the center. Scribner decided that he would not make the decision about who would receive dialysis and who would not. Instead, the choices would be made by an anonymous committee, which could be viewed as one of the first bioethics committees.

For a detailed history of successful and unsuccessful attempts at dialysis, including pioneers such as Abel and Rowntree, Haas, and Necheles, see this review by Kjellstrand.

Insect winter ecology

From Wikipedia, the free encyclopedia

Insect winter ecology describes the overwinter survival strategies of insects, which are in many respects more similar to those of plants than to many other animals, such as mammals and birds. Unlike those animals, which can generate their own heat internally (endothermic), insects must rely on external sources to provide their heat (ectothermic). Thus, insects persisting in winter weather must tolerate freezing or rely on other mechanisms to avoid freezing. Loss of enzymatic function and eventual freezing due to low temperatures are daily threats to the survival of these organisms during winter. Not surprisingly, insects have evolved a number of strategies to deal with the rigors of winter temperatures in places where they would otherwise not survive.

Two broad strategies for winter survival have evolved within Insecta as solutions to their inability to generate significant heat metabolically. Migration is a complete avoidance of the temperatures that pose a threat. An alternative to migration is weathering the cold temperatures present in the insect's normal habitat. Insect cold tolerance is generally separated into two strategies, freeze avoidance and freeze tolerance.

Migration

See: Insect migration

Migration of insects differs from migration of birds. Bird migration is a two-way, round-trip movement of each individual, whereas this is not usually the case with insects. As a consequence of the (typically) short lifespan of insects, adult insects who have completed one leg of the trip may be replaced by a member of the next generation on the return voyage. As a result, invertebrate biologists redefine migration for this group of organisms in three parts:

  1. A persistent, straight line movement away from the natal area
  2. Distinctive pre- and post-movement behaviors
  3. Re-allocation of energy within the body associated with the movement

This definition allows for mass insect movements to be considered as migration. Perhaps the best known insect migration is that of the monarch butterfly. The monarch in North America migrates from as far north as Canada southward to Mexico and Southern California annually from about August to October. The population east of the Rocky Mountains overwinters in Michoacán, Mexico, and the western population overwinters in various sites in central coastal California, notably in Pacific Grove and Santa Cruz. The round trip journey is typically around 3,600 km in length. The longest one-way flight on record for monarchs is 3,009 km from Ontario, Canada to San Luis Potosí, Mexico. They use the direction of sunlight and magnetic cues to orient themselves during migration.

The monarch requires significant energy to make such a long flight, which is provided by fat reserves. When they reach their overwintering sites, they begin a period of lowered metabolic rate. Nectar from flowers procured at the overwintering site provides energy for the northward migration. To limit their energy use, monarchs congregate in large clusters in order to maintain a suitable temperature. This strategy, similar to huddling in small mammals, makes use of body heat from all the organisms and lowers heat loss.

Another common winter migrant insect, found in much of North America, South America, and the Caribbean, is the green darner. Migration patterns in this species are much less studied than those of monarchs. Green darners leave their northern ranges in September and migrate south. Studies have noted a seasonal influx of green darners to southern Florida, which indicates migratory behavior. Little has been done with tracking of the green darner, and reasons for migration are not fully understood, since there are both resident and migrant populations. The common cue for migration southward in this species is the onset of winter.

Cold tolerance

Insects that do not migrate from regions with the onset of colder temperatures must devise strategies to either tolerate or avoid lethal freezing of intracellular and extracellular body fluids. Insects that survive subfreezing temperatures are generally classified as freeze-avoidant or freeze-tolerant. The general strategy adopted by insects differs between the northern hemisphere and the southern hemisphere. In temperate regions of the northern hemisphere, where cold temperatures are expected seasonally and usually last for long periods of time, the main strategy is freeze avoidance. In temperate regions of the southern hemisphere, where seasonal cold temperatures are not as extreme or long-lasting, freeze tolerance is more common. However, in the Arctic, where freezing occurs seasonally and for extended periods (>9 months), freeze tolerance also predominates.

Dangers of freezing

Intracellular ice formation usually causes cell death, even in freeze-tolerant species, due to physical stresses exerted as ice crystals expand. Ice formation in extracellular spaces increases the concentration of solutes in the extracellular fluid, resulting in the osmotic flow of water from intracellular spaces to extracellular spaces. Changes in solute concentration and dehydration can cause changes in enzyme activity and lead to the denaturation of proteins. If the temperature continues to decrease, the water that was drawn out of cells will also freeze, causing further cell shrinkage. Excessive cell shrinkage is dangerous because as ice forms outside the cell, the possible shapes that can be assumed by the cells are increasingly limited, causing damaging deformation. Finally, the expansion of ice within vessels and other spaces can cause physical damage to structures and tissues.

Freeze avoidance

Freeze-avoidant insects cannot tolerate internal ice formation, so they avoid freezing by depressing the temperature at which their body fluids freeze. This is done through supercooling, the process by which a liquid cools below its freezing point without changing phase into a solid. In order for water to freeze, a nucleus must be present upon which an ice crystal can begin to grow. At low temperatures, nuclei may arise spontaneously from clusters of slow-moving water molecules. Alternatively, substances that facilitate the aggregation of water molecules can increase the probability that they will reach the critical size necessary for ice formation. If no source of nucleation is introduced, water can cool down to −42°C without freezing. Therefore, when an insect maintains its body fluids in a supercooled state, there is the risk that spontaneous ice nucleation will occur. The temperature at which an insect spontaneously freezes is referred to as the supercooling point (SCP). For freeze avoidant insects, the SCP is thought to be equivalent to the lower lethal temperature (LLT) of the organism.

The freezing process is usually initiated extracellularly in the gut, tissues, or hemolymph. In order to supercool to lower temperatures, freeze-avoidant insects will remove or inactivate ice-nucleating agents (INAs) such as food particles, dust particles, and bacteria, found in the gut or intracellular compartments of these organisms. Removal of ice-nucleating material from the gut can be achieved by cessation in feeding, clearing the gut, and removing lipoprotein ice nucleators (LPINs) from the haemolymph.

Freezing can also be initiated by external contact with ice (inoculative freezing). Thus, some insects avoid freezing by selecting a dry hibernation site in which no ice nucleation from an external source can occur. Insects may also have a physical barrier, such as a wax-coated cuticle, that protects against inoculation by external ice across the cuticle. The stage of development at which an insect over-winters varies across species, but can occur at any point of the life cycle (i.e., egg, pupa, larva, and adult). Some species of Collembola tolerate extreme cold by shedding the mid-gut during moulting.

Overwintering lesser stag beetle larva

In addition to physical preparations for winter, many insects also alter their biochemistry and metabolism. For example, some insects synthesize cryoprotectants such as polyols and sugars, which reduce the whole-body SCP. Although polyols such as sorbitol, mannitol, and ethylene glycol can also be found, glycerol is by far the most common cryoprotectant and can be equivalent to ~20% of the total body mass. Glycerol is distributed uniformly throughout the head, the thorax, and the abdomen of insects, and is in equal concentration in intracellular and extracellular compartments. The depressive effect of glycerol on the supercooling point is thought to be due to the high viscosity of glycerol solutions at low temperatures. This would inhibit INA activity, and SCPs would drop far below the environmental temperature. At colder temperatures (below 0 °C), glycogen production is inhibited and the breakdown of glycogen into glycerol is enhanced, resulting in glycerol levels in freeze-avoidant insects that are about five times higher than those in freeze-tolerant insects, which do not need to cope with extended periods of cold temperatures.
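
For a sense of scale, the ordinary colligative (equilibrium) freezing-point depression of a 20% aqueous glycerol solution can be estimated with the ideal relation ΔT = Kf × molality. The result, a few degrees, is far smaller than the supercooling-point depressions seen in these insects, which is part of why additional mechanisms such as the viscosity effect mentioned above are invoked. The calculation below is an order-of-magnitude sketch, not a statement about any particular species.

KF_WATER = 1.86      # cryoscopic constant of water, K*kg/mol
M_GLYCEROL = 92.09   # molar mass of glycerol, g/mol

def colligative_depression(glycerol_mass_fraction):
    """Ideal freezing-point depression (deg C) of an aqueous glycerol solution."""
    g_glycerol = glycerol_mass_fraction * 1000.0   # grams per kg of solution
    g_water = 1000.0 - g_glycerol
    molality = (g_glycerol / M_GLYCEROL) / (g_water / 1000.0)
    return KF_WATER * molality

print(round(colligative_depression(0.20), 1))  # -> about 5 deg C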

Though not all freeze-avoidant insects produce polyols, all hibernating insects produce thermal hysteresis factors (THFs). For example, the haemolymph of the mealworm beetle Tenebrio molitor contains a family of such proteins. A seasonal photoperiodic timing mechanism is responsible for increasing antifreeze protein levels, with concentrations reaching their highest in the winter. In the pyrochroid beetle Dendroides canadensis, a short photoperiod of 8 hours of light and 16 hours of darkness results in the highest levels of THFs, which corresponds with the shortening of daylight hours associated with winter. These antifreeze proteins are thought to stabilize SCPs by binding directly to the surface structures of the ice crystals themselves, diminishing crystal size and growth. Therefore, instead of acting to change the biochemistry of the bodily fluids as seen with cryoprotectants, THFs act directly on the ice crystals, adsorbing to the developing crystals to inhibit their growth and reduce the chance of lethal freezing.

Freeze tolerance

Freeze tolerance in insects refers to the ability of some species to survive ice formation within their tissues. Insects that have evolved freeze-tolerance strategies manage to avoid tissue damage by controlling where, when, and to what extent ice forms. In contrast to freeze avoiding insects that are able to exist in cold conditions by supercooling, freeze tolerant insects limit supercooling and initiate the freezing of their body fluids at relatively high temperatures. Some insects accomplish this through inoculative freezing, while others produce cryoprotectants to control the rate of ice formation. Freezing at higher temperatures is advantageous because the rate of ice formation is slower, allowing the insect time to adjust to the internal changes that result from ice formation.

Most freeze-tolerant species restrict ice formation to extracellular spaces, as intracellular ice formation is usually lethal. Some species, however, are able to tolerate intracellular freezing. This was first discovered in the fat body cells of the goldenrod gall fly Eurosta solidaginis. The fat body is an insect tissue that is important for lipid, protein and carbohydrate metabolism (analogous to the mammalian liver). Although it is not certain why intracellular freezing is restricted to the fat body tissue in some insects, there is evidence that it may be due to the low water content within fat body cells.

Although freeze-avoidance strategies predominate in the insects, freeze tolerance has evolved at least six times within this group (in the Lepidoptera, Blattodea, Diptera, Orthoptera, Coleoptera, and Hymenoptera). Examples of freeze tolerant insects include: the woolly bear, Pyrrharctia isabella; the flightless midge, Belgica antarctica; the alpine tree weta, Hemideina maori; and the alpine cockroach, Celatoblatta quinquemaculata.

Freeze tolerance is more prevalent in insects from the Southern Hemisphere (reported in 85% of species studied) than it is in insects from the Northern Hemisphere (reported in 29% of species studied). It has been suggested that this may be due to the Southern Hemisphere's greater climate variability, where insects must be able to survive sudden cold snaps yet take advantage of unseasonably warm weather as well. This is in contrast to the Northern Hemisphere, where predictable weather makes it more advantageous to overwinter after extensive seasonal cold hardening.

Ice nucleators

Freeze-tolerant insects are known to produce ice nucleating proteins. The regulated production of ice nucleating proteins allows insects to control the formation of ice crystals within their bodies. The lower an insect's body temperature, the more likely it is that ice will begin to form spontaneously. Even freeze-tolerant animals cannot tolerate a sudden, total freeze; for most freeze-tolerant insects it is important that they avoid supercooling and initiate ice formation at relatively warm temperatures. This allows the insect to moderate the rate of ice growth and to adjust more gradually to the mechanical and osmotic pressures imposed by ice formation.

Nucleating proteins may be produced by the insect, or by microorganisms that have become associated with the insects' tissues. These microorganisms possess proteins within their cell walls that function as nuclei for ice growth.

The temperature at which a particular ice nucleator initiates freezing varies from molecule to molecule. Although an organism may possess a number of different ice nucleating proteins, only those that initiate freezing at the highest temperature will catalyze an ice nucleation event. Once freezing is initiated, ice will spread throughout the insect's body.

Cryoprotectants

The formation of ice in the extracellular fluid causes an overall movement of water out of cells, a phenomenon known as osmosis. As too much dehydration can be dangerous to cells, many insects possess high concentrations of solutes such as glycerol. Glycerol is a relatively polar molecule and therefore attracts water molecules, shifting the osmotic balance and holding some water inside the cells. As a result, cryoprotectants like glycerol decrease the amount of ice that forms outside of cells and reduce cellular dehydration. Insect cryoprotectants are also important for species that avoid freezing; see description above.
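The size of this protective effect can be illustrated with a rough, hedged calculation. The short Python sketch below is an added example rather than part of the source text, and the glycerol concentration in it is an assumed value chosen only for illustration; it applies the standard colligative freezing-point depression relation ΔTf = i · Kf · m to show how a dissolved cryoprotectant lowers the temperature at which ice can form.

# Illustrative sketch (added example, not from the source article): colligative
# freezing-point depression, one way to estimate how a solute such as glycerol
# lowers the equilibrium freezing point of an aqueous solution.
KF_WATER = 1.86   # cryoscopic constant of water, °C·kg/mol
molality = 2.0    # mol glycerol per kg of body water (assumed example value)
i_factor = 1      # glycerol does not dissociate, so the van 't Hoff factor is 1

delta_tf = i_factor * KF_WATER * molality
print(f"Freezing point lowered by about {delta_tf:.1f} °C")
# prints roughly 3.7 °C for this assumed concentration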

Locations of hibernating insects

Insects are well hidden in winter, but there are several locations in which they can reliably be found. Ladybugs practice communal hibernation, stacking on top of one another on stumps and under rocks to share heat and buffer themselves against winter temperatures. The female grasshopper (family Tettigoniidae [long-horned]) keeps her eggs safe through the winter by tunnelling into the soil and depositing them as deep in the ground as possible. Many other insects, including various butterflies and moths, also overwinter in soil in the egg stage. Some adult beetles hibernate underground during winter, and many flies overwinter in the soil as pupae. The western malaria mosquito overwinters as an adult, moving between human structures throughout the winter. Other methods of hibernation include sheltering under bark, where insects tend to nest toward the southern side of the tree to take advantage of heat from the sun. Cocoons, galls, and parasitism are also common methods of hibernation.

Aquatic insects

Insects that live underwater have different strategies for dealing with freezing than terrestrial insects do. Many insect species survive the winter not as adults on land but as larvae beneath the surface of the water. Under the water, many benthic invertebrates experience some subfreezing temperatures, especially in small streams. Aquatic insects have developed freeze tolerance much like their terrestrial counterparts. However, freeze avoidance is not an option for aquatic insects, as the presence of ice in their surroundings may cause ice nucleation in their tissues. Aquatic insects typically have supercooling points of around −3 °C to −7 °C. In addition to using freeze tolerance, many aquatic insects migrate deeper into the water body, where temperatures are higher than at the surface. Stoneflies, mayflies, caddisflies, and dragonflies are common overwintering aquatic insects. Dance fly larvae have the lowest reported supercooling point for an aquatic insect, at −22 °C.


Cryogenics

From Wikipedia, the free encyclopedia

A diagram of an infrared space telescope that needs a cold mirror and cold instruments. One instrument needs to be even colder and has its own cryocooler; the instrument is in region 1 and its cryocooler is in region 3, a warmer region of the spacecraft (see MIRI (Mid-Infrared Instrument) or the James Webb Space Telescope).
 
A medium-sized dewar is being filled with liquid nitrogen by a larger cryogenic storage tank

In physics, cryogenics is the production and behaviour of materials at very low temperatures.

The 13th IIR International Congress of Refrigeration (held in Washington, DC, in 1971) endorsed a universal definition of "cryogenics" and "cryogenic" by accepting a threshold of 120 K (−153 °C) to distinguish these terms from conventional refrigeration. This is a logical dividing line, since the normal boiling points of the so-called permanent gases (such as helium, hydrogen, neon, nitrogen, oxygen, and normal air) lie below 120 K, while the Freon refrigerants, hydrocarbons, and other common refrigerants have boiling points above 120 K. The U.S. National Institute of Standards and Technology considers the field of cryogenics to be that involving temperatures below −180 °C (93 K; −292 °F).

Discovery of superconducting materials with critical temperatures significantly above the boiling point of liquid nitrogen has provided new interest in reliable, low cost methods of producing high temperature cryogenic refrigeration. The term "high temperature cryogenic" describes temperatures ranging from above the boiling point of liquid nitrogen, −195.79 °C (77.36 K; −320.42 °F), up to −50 °C (223 K; −58 °F).

Cryogenicists use the Kelvin or Rankine temperature scale, both of which measure from absolute zero, rather than more usual scales such as Celsius, which measures from the freezing point of water at sea level, or Fahrenheit, whose zero lies at an arbitrary temperature.
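As a quick illustration (an added sketch, not part of the original article), the Python snippet below converts between these scales using the standard relations K = °C + 273.15 and °R = °F + 459.67 = 1.8 × K, applied to the 120 K threshold discussed above.

# Conversions between the temperature scales mentioned above. Kelvin and
# Rankine are absolute scales; Celsius and Fahrenheit have offset zero points.
def kelvin_to_celsius(k):
    return k - 273.15

def kelvin_to_rankine(k):
    return k * 9.0 / 5.0

def rankine_to_fahrenheit(r):
    return r - 459.67

threshold_k = 120.0  # the IIR cryogenic threshold
print(round(kelvin_to_celsius(threshold_k), 2))    # -153.15 (°C)
print(round(kelvin_to_rankine(threshold_k), 2))    # 216.0 (°R)
print(round(rankine_to_fahrenheit(kelvin_to_rankine(threshold_k)), 2))  # -243.67 (°F)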

Definitions and distinctions

Cryogenics
The branches of engineering that involve the study of very low temperatures, how to produce them, and how materials behave at those temperatures.
Cryobiology
The branch of biology involving the study of the effects of low temperatures on organisms (most often for the purpose of achieving cryopreservation).
Cryoconservation of animal genetic resources
The conservation of genetic material with the intention of conserving a breed.
Cryosurgery
The branch of surgery applying cryogenic temperatures to destroy diseased tissue, e.g. cancer cells.
Cryoelectronics
The study of electronic phenomena at cryogenic temperatures. Examples include superconductivity and variable-range hopping.
Cryonics
Cryopreserving humans and animals with the intention of future revival. "Cryogenics" is sometimes erroneously used to mean "Cryonics" in popular culture and the press.[7]

Etymology

The word cryogenics stems from Greek κρύος (cryos) – "cold" + γενής (genis) – "generating".

Cryogenic fluids

Cryogenic fluids with their boiling point in kelvins.

Fluid Boiling point (K)
Helium-3 3.19
Helium-4 4.214
Hydrogen 20.27
Neon 27.09
Nitrogen 77.09
Air 78.8
Fluorine 85.24
Argon 87.24
Oxygen 90.18
Methane 111.7
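As a small usage sketch (added here, not part of the original article), the table can be treated as a lookup structure and each boiling point converted to Celsius, which is often the more familiar scale for comparison.

# Boiling points at atmospheric pressure, in kelvins, copied from the table above.
BOILING_POINT_K = {
    "Helium-3": 3.19, "Helium-4": 4.214, "Hydrogen": 20.27, "Neon": 27.09,
    "Nitrogen": 77.09, "Air": 78.8, "Fluorine": 85.24, "Argon": 87.24,
    "Oxygen": 90.18, "Methane": 111.7,
}

def boiling_point_celsius(fluid):
    """Return the boiling point of a listed cryogenic fluid in degrees Celsius."""
    return BOILING_POINT_K[fluid] - 273.15

for fluid, bp_k in BOILING_POINT_K.items():
    print(f"{fluid:9s} {bp_k:7.2f} K  ({boiling_point_celsius(fluid):8.2f} °C)")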

Industrial applications

Cryogenic valve

Liquefied gases, such as liquid nitrogen and liquid helium, are used in many cryogenic applications. Liquid nitrogen is the most commonly used element in cryogenics and is legally purchasable around the world. Liquid helium is also commonly used and allows for the lowest attainable temperatures to be reached.

These liquids may be stored in Dewar flasks, which are double-walled containers with a high vacuum between the walls to reduce heat transfer into the liquid. Typical laboratory Dewar flasks are spherical, made of glass and protected in a metal outer container. Dewar flasks for extremely cold liquids such as liquid helium have another double-walled container filled with liquid nitrogen. Dewar flasks are named after their inventor, James Dewar, the man who first liquefied hydrogen. Thermos bottles are smaller vacuum flasks fitted in a protective casing.

Cryogenic barcode labels are used to mark Dewar flasks containing these liquids, and will not frost over down to −195 degrees Celsius.

Cryogenic transfer pumps are the pumps used on LNG piers to transfer liquefied natural gas from LNG carriers to LNG storage tanks, as are cryogenic valves.

Cryogenic processing

The field of cryogenics advanced during World War II, when scientists found that metals frozen to low temperatures showed more resistance to wear. Based on this theory of cryogenic hardening, Ed Busch founded the commercial cryogenic processing industry in 1966. Drawing on his background in the heat-treating industry, Busch started a company in Detroit called CryoTech, which merged with 300 Below in 1999 to become the world's largest and oldest commercial cryogenic processing company. Busch originally experimented with the possibility of increasing the life of metal tools to anywhere between 200% and 400% of the original life expectancy by using cryogenic tempering instead of heat treating. This evolved in the late 1990s into the treatment of other parts.

Cryogens, such as liquid nitrogen, are further used for specialty chilling and freezing applications. Some chemical reactions, like those used to produce the active ingredients for the popular statin drugs, must occur at low temperatures of approximately −100 °C (−148 °F). Special cryogenic chemical reactors are used to remove reaction heat and provide a low temperature environment. The freezing of foods and biotechnology products, like vaccines, requires nitrogen in blast freezing or immersion freezing systems. Certain soft or elastic materials become hard and brittle at very low temperatures, which makes cryogenic milling (cryomilling) an option for some materials that cannot easily be milled at higher temperatures.

Cryogenic processing is not a substitute for heat treatment but rather an extension of the heating–quenching–tempering cycle. Normally, when an item is quenched, the final temperature is ambient, mainly because most heat treaters do not have cooling equipment; there is nothing metallurgically significant about ambient temperature. The cryogenic process continues this action from ambient temperature down to −320 °F (140 °R; 78 K; −196 °C). In most instances the cryogenic cycle is followed by a heat tempering procedure. Because alloys do not all have the same chemical constituents, the tempering procedure varies according to the material's chemical composition, thermal history, and/or the tool's particular service application.

The entire process takes 3–4 days.

Fuels

Another use of cryogenics is cryogenic fuels for rockets, with liquid hydrogen the most widely used example. Liquid oxygen (LOX) is even more widely used, but as an oxidizer rather than a fuel. NASA's workhorse Space Shuttle used cryogenic hydrogen/oxygen propellant as its primary means of getting into orbit. LOX is also widely used with RP-1 kerosene, a non-cryogenic hydrocarbon, as in the rockets built for the Soviet space program by Sergei Korolev.

Russian aircraft manufacturer Tupolev developed a version of its popular design Tu-154 with a cryogenic fuel system, known as the Tu-155. The plane uses a fuel referred to as liquefied natural gas or LNG, and made its first flight in 1989.

Other applications

Astronomical instruments on the Very Large Telescope are equipped with continuous-flow cooling systems.

Some applications of cryogenics:

  • Nuclear magnetic resonance (NMR) is one of the most common methods to determine the physical and chemical properties of atoms by detecting the radio frequency absorbed and the subsequent relaxation of nuclei in a magnetic field. It is one of the most commonly used characterization techniques and has applications in numerous fields. Primarily, the strong magnetic fields are generated by superconducting electromagnets, although there are spectrometers that do not require cryogens. In traditional superconducting solenoids, liquid helium is used to cool the inner coils because it has a boiling point of around 4 K at ambient pressure. Cheap metallic superconductors can be used for the coil wiring. So-called high-temperature superconducting compounds can be made to superconduct with the use of liquid nitrogen, which boils at around 77 K.
  • Magnetic resonance imaging (MRI) is a complex application of NMR in which the geometry of the resonances is deconvoluted and used to image objects by detecting the relaxation of protons that have been perturbed by a radio-frequency pulse in the strong magnetic field. It is most commonly used in health applications.
  • In large cities it is difficult to transmit power by overhead cables, so underground cables are used. But underground cables heat up, and the resistance of the wire increases, leading to wasted power. Superconductors could be used to increase power throughput, although they would require cryogenic liquids such as nitrogen or helium to cool special alloy-containing cables. Several feasibility studies have been performed, and the field is the subject of an agreement within the International Energy Agency.
Cryogenic gases delivery truck at a supermarket, Ypsilanti, Michigan
  • Cryogenic gases are used in the transportation and storage of large masses of frozen food. When very large quantities of food must be transported to regions such as war zones or earthquake-hit areas, they must be stored for a long time, so cryogenic food freezing is used. Cryogenic food freezing is also helpful for large-scale food processing industries.
  • Many infrared (forward looking infrared) cameras require their detectors to be cryogenically cooled.
  • Certain rare blood groups are stored at low temperatures, such as −165 °C, at blood banks.
  • Cryogenics technology using liquid nitrogen and CO2 has been built into nightclub effect systems to create a chilling effect and white fog that can be illuminated with colored lights.
  • Cryogenic cooling is used to cool the tool tip during machining in manufacturing processes, which increases tool life. Oxygen is used to perform several important functions in the steel manufacturing process.
  • Many rockets use cryogenic gases as propellants. These include liquid oxygen, liquid hydrogen, and liquid methane.
  • Freezing automobile or truck tires in liquid nitrogen makes the rubber brittle so that it can be crushed into small particles, which can then be reused in other products.
  • Experimental research on certain physics phenomena, such as spintronics and magnetotransport properties, requires cryogenic temperatures for the effects to be observed.
  • Certain vaccines must be stored at cryogenic temperatures. For example, the Pfizer–BioNTech COVID-19 vaccine must be stored at temperatures of −90 to −60 °C (−130 to −76 °F). (See cold chain.)

Production

Cryogenic cooling of devices and material is usually achieved via the use of liquid nitrogen, liquid helium, or a mechanical cryocooler (which uses high-pressure helium lines). Gifford-McMahon cryocoolers, pulse tube cryocoolers and Stirling cryocoolers are in wide use with selection based on required base temperature and cooling capacity. The most recent development in cryogenics is the use of magnets as regenerators as well as refrigerators. These devices work on the principle known as the magnetocaloric effect.

Detectors

There are various cryogenic detectors which are used to detect particles.

For cryogenic temperature measurement down to 30 K, Pt100 sensors, a type of resistance temperature detector (RTD), are used. For temperatures lower than 30 K, a silicon diode is needed for accuracy.
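As an added, hedged sketch (not taken from the source article), the standard IEC 60751 Callendar-Van Dusen polynomial shows how a Pt100 element's resistance maps to temperature; it is specified down to roughly −200 °C, so readings near 30 K and below rely on individual sensor calibration or silicon diodes rather than this formula.

# Pt100 resistance as a function of temperature (Callendar-Van Dusen, IEC 60751).
# Standard coefficients; valid roughly from -200 °C to 850 °C, so this is an
# illustration only and does not cover the lowest cryogenic temperatures.
R0 = 100.0        # resistance at 0 °C, in ohms
A = 3.9083e-3
B = -5.775e-7
C = -4.183e-12    # the C term applies only below 0 °C

def pt100_resistance(t_celsius):
    """Resistance (ohms) of an ideal Pt100 element at temperature t_celsius."""
    if t_celsius >= 0:
        return R0 * (1 + A * t_celsius + B * t_celsius ** 2)
    return R0 * (1 + A * t_celsius + B * t_celsius ** 2
                 + C * (t_celsius - 100) * t_celsius ** 3)

print(round(pt100_resistance(-196), 1))   # about 20.2 ohms in liquid nitrogen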


Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...