
Thursday, October 31, 2019

Linear no-threshold model

From Wikipedia, the free encyclopedia
 
Different assumptions on the extrapolation of cancer risk vs. radiation dose to low-dose levels, given a known risk at a high dose: (A) supra-linearity, (B) linear, (C) linear-quadratic, (D) hormesis.
 
The linear no-threshold model (LNT) is a dose-response model used in radiation protection to estimate stochastic health effects such as radiation-induced cancer, genetic mutations and teratogenic effects on the human body due to exposure to ionizing radiation.

Stochastic health effects are those that occur by chance, and whose probability is proportional to the dose, but whose severity is independent of the dose. The LNT model assumes there is no lower threshold at which stochastic effects start, and assumes a linear relationship between dose and the stochastic health risk. In other words, LNT assumes that radiation has the potential to cause harm at any dose level, and the sum of several very small exposures is just as likely to cause a stochastic health effect as a single larger exposure of equal dose value. In contrast, deterministic health effects are radiation-induced effects such as acute radiation syndrome, which are caused by tissue damage. Deterministic effects reliably occur above a threshold dose and their severity increases with dose. Because of the inherent differences, LNT is not a model for deterministic effects, which are instead characterized by other types of dose-response relationships. 

LNT is a common model for calculating the probability of radiation-induced cancer both at high doses, where epidemiological studies support its application, and, more controversially, at low doses, a dose region with lower predictive statistical confidence. Nonetheless, regulatory bodies commonly use LNT as the basis for regulatory dose limits to protect against stochastic health effects, as found in many public health policies. As of 2016, three active challenges to the LNT model were being considered by the US Nuclear Regulatory Commission. One was filed by Nuclear Medicine Professor Carol Marcus of UCLA, who calls the LNT model scientific "baloney".

Whether the model describes the reality of small-dose exposures is disputed. It is opposed by two competing schools of thought: the threshold model, which assumes that very small exposures are harmless, and the radiation hormesis model, which claims that radiation at very small doses can be beneficial. Because the current data are inconclusive, scientists disagree on which model should be used. Pending a definitive answer, and in keeping with the precautionary principle, the model is sometimes used to quantify the cancerous effect of collective doses of low-level radioactive contamination, even though it estimates a positive number of excess deaths at levels where the two other models would predict zero deaths or even lives saved. Such practice has been condemned by the International Commission on Radiological Protection.

UNSCEAR, one of the organizations that establishes international recommendations on radiation protection guidelines, recommended policies in 2014 that do not agree with the LNT model at exposure levels below background levels. The recommendation states "the Scientific Committee does not recommend multiplying very low doses by large numbers of individuals to estimate numbers of radiation-induced health effects within a population exposed to incremental doses at levels equivalent to or lower than natural background levels." This is a reversal of previous recommendations by the same organization.

The LNT model is sometimes applied to other cancer hazards such as polychlorinated biphenyls in drinking water.

Origins

Increased risk of solid cancer with dose for A-bomb survivors, from the BEIR report. Notably, this exposure came from an essentially instantaneous spike or pulse of radiation during the brief moment the bomb exploded. While somewhat similar to the exposure environment of a CT scan, it is wholly unlike the low dose rate of living in a contaminated area such as Chernobyl, where the dose rate is orders of magnitude smaller and the environments and cellular effects are vastly different. LNT, however, does not consider dose rate; it is an unsubstantiated one-size-fits-all approach based solely on total absorbed dose. Likewise, it has been pointed out that bomb survivors inhaled carcinogenic benzopyrene from the burning cities, yet this is not factored in.
 
The association of radiation exposure with cancer had been observed as early as 1902, six years after the discovery of X-rays by Wilhelm Röntgen and of radioactivity by Henri Becquerel. In 1927, Hermann Muller demonstrated that radiation may cause genetic mutation. He also suggested mutation as a cause of cancer. Muller, who received a Nobel Prize in 1946 for his work on the mutagenic effects of radiation, asserted in his Nobel Lecture, "The Production of Mutation", that mutation frequency is "directly and simply proportional to the dose of irradiation applied" and that there is "no threshold dose".

The early studies were based on relatively high levels of radiation that made it hard to establish the safety of low levels, and many scientists at the time believed there might be a tolerance level and that low doses of radiation might not be harmful. A 1955 study of mice exposed to low doses of radiation suggested that they might outlive control animals. Interest in the effects of radiation intensified after the dropping of atomic bombs on Hiroshima and Nagasaki, and studies were conducted on the survivors. Although compelling evidence on the effects of low doses was hard to come by, by the late 1940s the LNT idea had become more popular due to its mathematical simplicity. In 1954, the National Council on Radiation Protection and Measurements (NCRP) introduced the concept of maximum permissible dose. In 1958, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) assessed the LNT model and a threshold model, but noted the difficulty in acquiring "reliable information about the correlation between small doses and their effects either in individuals or in large populations". The United States Congress Joint Committee on Atomic Energy (JCAE) similarly could not establish whether there is a threshold or "safe" level of exposure; nevertheless, it introduced the concept of "As Low As Reasonably Achievable" (ALARA). ALARA would become a fundamental principle in radiation protection policy that implicitly accepts the validity of LNT. In 1959, the United States Federal Radiation Council (FRC) supported the concept of LNT extrapolation down to the low dose region in its first report.

By the 1970s, the LNT model had become accepted as the standard in radiation protection practice by a number of bodies. In 1972, the first report of the National Academy of Sciences (NAS) Biological Effects of Ionizing Radiation (BEIR) committee, an expert panel that reviewed the available peer-reviewed literature, supported the LNT model on pragmatic grounds, noting that while the "dose-effect relationship for x rays and gamma rays may not be a linear function", the "use of linear extrapolation ... may be justified on pragmatic grounds as a basis for risk estimation." In its seventh report of 2006, NAS BEIR VII writes, "the committee concludes that the preponderance of information indicates that there will be some risk, even at low doses".

Radiation precautions and public policy

Radiation precautions have led to sunlight being listed as a carcinogen at all sun exposure rates, due to the ultraviolet component of sunlight, with no safe level of sun exposure suggested, following the precautionary LNT model. According to a 2007 study submitted by the University of Ottawa to the Department of Health and Human Services in Washington, D.C., there is not enough information to determine a safe level of sun exposure at this time.

If a particular dose of radiation is found to produce one extra case of a type of cancer in every thousand people exposed, LNT projects that one thousandth of this dose will produce one extra case in every million people so exposed, and that one millionth of the original dose will produce one extra case in every billion people exposed. The conclusion is that any given dose equivalent of radiation will produce the same number of cancers, no matter how thinly it is spread. This allows the summation by dosimeters of all radiation exposure, without taking into consideration dose levels or dose rates.
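This proportionality can be stated compactly. A minimal worked form of the extrapolation in the paragraph above (the risk coefficient r is a generic placeholder, not a measured value):

```latex
% LNT: expected excess cases E scale linearly with dose D and the number
% of people exposed N, with r the assumed risk per person per unit dose.
\[
  E = r \, D \, N
\]
% Example from the text: if a dose D produces one extra case per thousand
% people, then rD = 10^{-3} per person. A dose of D/1000 gives
% r(D/1000) = 10^{-6} per person, i.e. one extra case per million people,
% so the expected number of cases depends only on the collective dose DN.
```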

The model is simple to apply: a quantity of radiation can be translated into a number of deaths without any adjustment for the distribution of exposure, including the distribution of exposure within a single exposed individual. For example, a hot particle embedded in an organ (such as the lung) results in a very high dose in the cells directly adjacent to the hot particle, but a much lower whole-organ and whole-body dose. Thus, even if a safe low-dose threshold were found to exist at the cellular level for radiation-induced mutagenesis, the threshold would not exist for environmental pollution with hot particles, and could not be safely assumed to exist when the distribution of dose is unknown.

The linear no-threshold model is used to extrapolate the expected number of extra deaths caused by exposure to environmental radiation, and it therefore has a great impact on public policy. The model is used to translate any radiation release, like that from a "dirty bomb", into a number of lives lost, while any reduction in radiation exposure, for example as a consequence of radon detection, is translated into a number of lives saved. When doses are very low, at natural background levels, the model, in the absence of direct evidence, predicts by extrapolation new cancers in only a very small fraction of the population; but across a large population that fraction extrapolates into hundreds or thousands of lives, and this can sway public policy.
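A minimal sketch of how such a headline number arises under LNT, in Python. The risk coefficient and the function name are illustrative assumptions (the coefficient is of the order of nominal values used in radiation protection, not a figure from this article):

```python
# Illustrative LNT collective-dose estimate; not a regulatory calculation.
# RISK_PER_SV is an assumed nominal fatal-cancer risk coefficient on the
# order of ~5% per sievert; real assessments use age-, sex- and
# tissue-specific models rather than a single constant.
RISK_PER_SV = 0.05

def lnt_excess_deaths(mean_dose_sv: float, population: int) -> float:
    """Expected excess deaths = risk coefficient x collective dose.

    Under LNT only the collective dose (person-sieverts) matters;
    neither the dose rate nor its distribution enters the calculation.
    """
    collective_dose_person_sv = mean_dose_sv * population
    return RISK_PER_SV * collective_dose_person_sv

# 1 mSv spread over 10 million people -> 10,000 person-Sv -> ~500 deaths:
# a tiny per-person risk becomes a large, policy-swaying absolute number.
print(lnt_excess_deaths(0.001, 10_000_000))
```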

A linear model has long been used in health physics to set maximum acceptable radiation exposures. The United States-based National Council on Radiation Protection and Measurements (NCRP), a body commissioned by the United States Congress, recently released a report, written by national experts in the field, which states that radiation's effects should be considered proportional to the dose an individual receives, regardless of how small that dose is.

A 1958 analysis of two decades of research on the mutation rate of one million lab mice showed that six major hypotheses about ionizing radiation and gene mutation were not supported by data. Its data were used in 1972 by the Biological Effects of Ionizing Radiation I committee to support the LNT model. However, it has been claimed that the data contained a fundamental error that was not revealed to the committee, that they would not support the LNT model on the issue of mutations, and that they may suggest a threshold dose rate below which radiation does not produce any mutations. The acceptance of the LNT model has been challenged by a number of scientists; see the Controversy section below.

Fieldwork

The LNT model and the alternatives to it each have plausible mechanisms that could bring them about, but definitive conclusions are hard to make given the difficulty of doing longitudinal studies involving large cohorts over long periods.

A 2003 review of the various studies published in the authoritative Proceedings of the National Academy of Sciences concludes that "given our current state of knowledge, the most reasonable assumption is that the cancer risks from low doses of x- or gamma-rays decrease linearly with decreasing dose."

A 2005 study of Ramsar, Iran (a region with very high levels of natural background radiation) showed that lung cancer incidence was lower in the high-radiation area than in seven surrounding regions with lower levels of natural background radiation. A fuller epidemiological study of the same region showed no difference in mortality for males, and a statistically insignificant increase for females.

A 2009 study of Swedish children exposed to fallout from Chernobyl while in utero, between 8 and 25 weeks of gestation, concluded that the reduction in IQ at very low doses was greater than expected given a simple LNT model for radiation damage, indicating that for neurological damage the LNT model may not be conservative enough. However, studies in medical journals detail that in Sweden in 1986, the year of the Chernobyl accident, the birth rate both increased and shifted toward higher maternal age. More advanced maternal age in Swedish mothers was linked with a reduction in offspring IQ in a paper published in 2013. Neurological damage also has a different biology than cancer.

In a 2009 study, cancer rates among UK radiation workers were found to increase with higher recorded occupational radiation doses. The doses examined varied between 0 and 500 mSv received over the workers' working lives. At a 90% confidence level, these results exclude both the possibility of no increase in risk and the possibility that the risk is 2-3 times that observed in A-bomb survivors. The cancer risk for these radiation workers was still less than the average for persons in the UK, due to the healthy worker effect.

A 2009 study focusing on the naturally high background radiation region of Karunagappalli, India concluded: "our cancer incidence study, together with previously reported cancer mortality studies in the HBR area of Yangjiang, China, suggests it is unlikely that estimates of risk at low doses are substantially greater than currently believed." A 2011 meta-analysis further concluded that the "Total whole body radiation doses received over 70 years from the natural environment high background radiation areas in Kerala, India and Yanjiang, China are much smaller than [the non-tumour dose, "defined as the highest dose of radiation at which no statistically significant tumour increase was observed above the control level"] for the respective dose-rates in each district."

In 2011 an in vitro time-lapse study of the cellular response to low doses of radiation showed a strongly non-linear response of certain cellular repair mechanisms called radiation-induced foci (RIF). The study found that low doses of radiation prompted higher rates of RIF formation than high doses, and that after low-dose exposure RIF continued to form after the radiation had ended.

In 2012 a historical cohort study of more than 175,000 patients without previous cancer, examined with CT head scans in the UK between 1985 and 2002, was published. The study, which investigated leukaemia and brain cancer, indicated a linear dose response in the low-dose region and produced qualitative estimates of risk in agreement with the Life Span Study (epidemiology data for low-linear energy transfer radiation).

In 2013 a data linkage study of 11 million Australians, including more than 680,000 people exposed to CT scans between 1985 and 2005, was published. The study confirmed the results of the 2012 UK study for leukaemia and brain cancer, but also investigated other cancer types. The authors concluded that their results were generally consistent with the linear no-threshold theory.

Controversy

The LNT model has been contested by a number of scientists. It has been claimed that Hermann Joseph Muller, an early proponent of the model, intentionally ignored an early study that did not support the LNT model when he gave his 1946 Nobel Prize address advocating it.

It is also argued that the LNT model has caused an irrational fear of radiation. In the wake of the 1986 Chernobyl accident in Ukraine, anxieties were fomented among pregnant women across Europe over the perception, reinforced by the LNT model, that their children would be born with a higher rate of mutations. As far afield as Denmark, hundreds of excess induced abortions of healthy unborn children were performed out of this no-threshold fear. Following the accident, however, data sets approaching a million births in the EUROCAT database, divided into "exposed" and control groups, were assessed in 1999. As no Chernobyl impacts were detected, the researchers concluded "in retrospect the widespread fear in the population about the possible effects of exposure on the unborn was not justified". Despite studies from Germany and Turkey, the only robust evidence of negative pregnancy outcomes after the accident were these elective-abortion indirect effects, in Greece, Denmark, Italy and elsewhere, due to the anxieties created.

From very high dose radiation therapy it was known at the time that radiation can cause a physiological increase in the rate of pregnancy anomalies; however, human exposure data and animal testing suggest that the "malformation of organs appears to be a deterministic effect with a threshold dose" below which no rate increase is observed. A 1999 review of the link between the Chernobyl accident and teratology (birth defects) concluded that "there is no substantive proof regarding radiation‐induced teratogenic effects from the Chernobyl accident". It is argued that the human body has defense mechanisms, such as DNA repair and programmed cell death, that would protect it against carcinogenesis due to low-dose exposures to carcinogens.

Ramsar, in Iran, is often cited as a counterexample to LNT. Based on preliminary results, it was considered to have the highest natural background radiation levels on Earth, several times higher than the ICRP-recommended radiation dose limits for radiation workers, while the local population did not seem to suffer any ill effects. However, the population of the high-radiation districts is small (about 1,800 inhabitants) and receives an average of only 6 millisieverts per year, so the cancer epidemiology data are too imprecise to draw any conclusions. On the other hand, there may be non-cancer effects from the background radiation, such as chromosomal aberrations or female infertility.

A 2011 study of cellular repair mechanisms supports the evidence against the linear no-threshold model. According to its authors, this study, published in the Proceedings of the National Academy of Sciences of the United States of America, "casts considerable doubt on the general assumption that risk to ionizing radiation is proportional to dose".

However, a 2011 review of studies addressing childhood leukaemia following exposure to ionizing radiation, including both diagnostic exposure and natural background exposure, concluded that the existing risk factor, excess relative risk per sievert (ERR/Sv), is "broadly applicable" to low-dose or low dose-rate exposure.

Several expert scientific panels have been convened on the accuracy of the LNT model at low dosage, and various organizations and bodies have stated their positions on this topic:
Support
  • In 2004 the United States National Research Council (part of the National Academy of Sciences) supported the linear no threshold model and stated regarding Radiation hormesis:
    The assumption that any stimulatory hormetic effects from low doses of ionizing radiation will have a significant health benefit to humans that exceeds potential detrimental effects from the radiation exposure is unwarranted at this time.
  • In 2005 the United States National Academies' National Research Council published its comprehensive meta-analysis of low-dose radiation research BEIR VII, Phase 2. In its press release the Academies stated:
The scientific research base shows that there is no threshold of exposure below which low levels of ionizing radiation can be demonstrated to be harmless or beneficial.
  • The National Council on Radiation Protection and Measurements (a body commissioned by the United States Congress) endorsed the LNT model in a 2001 report that attempted to survey existing literature critical of the model.
  • The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) wrote in its 2000 report:
    Until the [...] uncertainties on low-dose response are resolved, the Committee believes that an increase in the risk of tumour induction proportionate to the radiation dose is consistent with developing knowledge and that it remains, accordingly, the most scientifically defensible approximation of low-dose response. However, a strictly linear dose response should not be expected in all circumstances.
  • The United States Environmental Protection Agency also endorses the LNT model in its 2011 report on radiogenic cancer risk:
    Underlying the risk models is a large body of epidemiological and radiobiological data. In general, results from both lines of research are consistent with a linear, no-threshold dose (LNT) response model in which the risk of inducing a cancer in an irradiated tissue by low doses of radiation is proportional to the dose to that tissue.
Oppose
A number of organisations disagree with using the Linear no-threshold model to estimate risk from environmental and occupational low-level radiation exposure:
  • The French Academy of Sciences (Académie des Sciences) and the National Academy of Medicine (Académie Nationale de Médecine) published a report in 2005 (at the same time as BEIR VII report in the United States) that rejected the Linear no-threshold model in favor of a threshold dose response and a significantly reduced risk at low radiation exposure:
In conclusion, this report raises doubts on the validity of using LNT for evaluating the carcinogenic risk of low doses (< 100 mSv) and even more for very low doses (< 10 mSv). The LNT concept can be a useful pragmatic tool for assessing rules in radioprotection for doses above 10 mSv; however since it is not based on biological concepts of our current knowledge, it should not be used without precaution for assessing by extrapolation the risks associated with low and even more so, with very low doses (< 10 mSv), especially for benefit-risk assessments imposed on radiologists by the European directive 97-43.
  • The Health Physics Society's position statement first adopted in January 1996, as revised in July 2010, states:
In accordance with current knowledge of radiation health risks, the Health Physics Society recommends against quantitative estimation of health risks below an individual dose of 5 rem (50 mSv) in one year or a lifetime dose of 10 rem (100 mSv) above that received from natural sources. Doses from natural background radiation in the United States average about 0.3 rem (3 mSv) per year. A dose of 5 rem (50 mSv) will be accumulated in the first 17 years of life and about 25 rem (250 mSv) in a lifetime of 80 years. Estimation of health risk associated with radiation doses that are of similar magnitude as those received from natural sources should be strictly qualitative and encompass a range of hypothetical health outcomes, including the possibility of no adverse health effects at such low levels.
  • The American Nuclear Society recommended further research on the Linear No Threshold Hypothesis before making adjustments to current radiation protection guidelines, concurring with the Health Physics Society's position that:
    There is substantial and convincing scientific evidence for health risks at high dose. Below 10 rem or 100 mSv (which includes occupational and environmental exposures) risks of health effects are either too small to be observed or are non-existent.
Intermediate
The US Nuclear Regulatory Commission takes the intermediate position that it "accepts the LNT hypothesis as a conservative model for estimating radiation risk", while noting that "public health data do not absolutely establish the occurrence of cancer following exposure to low doses and dose rates — below about 10,000 mrem (100 mSv). Studies of occupational workers who are chronically exposed to low levels of radiation above normal background have shown no adverse biological effects."

Mental health effects

The consequences of low-level radiation are often more psychological than radiological. Because damage from very-low-level radiation cannot be detected, people exposed to it are left in anguished uncertainty about what will happen to them. Many believe they have been fundamentally contaminated for life and may refuse to have children for fear of birth defects. They may be shunned by others in their community who fear a sort of mysterious contagion.

Forced evacuation from a radiation or nuclear accident may lead to social isolation, anxiety, depression, psychosomatic medical problems, reckless behavior, even suicide. Such was the outcome of the 1986 Chernobyl nuclear disaster in Ukraine. A comprehensive 2005 study concluded that "the mental health impact of Chernobyl is the largest public health problem unleashed by the accident to date". Frank N. von Hippel, a U.S. scientist, commented on the 2011 Fukushima nuclear disaster, saying that "fear of ionizing radiation could have long-term psychological effects on a large portion of the population in the contaminated areas".

Such great psychological danger does not accompany other materials that put people at risk of cancer and other deadly illness. Visceral fear is not widely aroused by, for example, the daily emissions from coal burning, although, as a National Academy of Sciences study found, this causes 10,000 premature deaths a year in the US. It is "only nuclear radiation that bears a huge psychological burden — for it carries a unique historical legacy".

Radiology

From Wikipedia, the free encyclopedia

A radiologist interpreting magnetic resonance imaging.

Radiology is the medical specialty that uses medical imaging to diagnose and treat diseases within the bodies of both humans and animals. 

A variety of imaging techniques such as X-ray radiography, ultrasound, computed tomography (CT), nuclear medicine including positron emission tomography (PET), and magnetic resonance imaging (MRI) are used to diagnose or treat diseases. Interventional radiology is the performance of medical procedures, usually minimally invasive, with the guidance of imaging technologies such as those mentioned above.

The modern practice of radiology involves several different healthcare professions working as a team. The radiologist is a medical doctor who has completed the appropriate post-graduate training and interprets medical images, communicates these findings to other physicians by means of a report or verbally, and uses imaging to perform minimally invasive medical procedures. The nurse is involved in the care of patients before and after imaging or procedures, including administration of medications, monitoring of vital signs and monitoring of sedated patients. The radiographer, also known as a "radiologic technologist" in some countries such as the United States, is a specially trained healthcare professional who uses sophisticated technology and positioning techniques to produce medical images for the radiologist to interpret. Depending on the individual's training and country of practice, the radiographer may specialize in one of the above-mentioned imaging modalities or have expanded roles in image reporting.

Diagnostic imaging modalities

Projection (plain) radiography

Radiography of the knee using a DR machine.
 
 
Radiographs (originally called roentgenographs, named after the discoverer of X-rays, Wilhelm Conrad Röntgen) are produced by transmitting X-rays through a patient. The X-rays are projected through the body onto a detector; an image is formed based on which rays pass through (and are detected) versus those that are absorbed or scattered in the patient (and thus are not detected). Röntgen discovered X-rays on November 8, 1895 and received the first Nobel Prize in Physics for their discovery in 1901.
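The contrast between detected and absorbed rays follows the standard exponential attenuation law; a brief sketch (general physics, not specific to this article):

```latex
% Beer-Lambert attenuation: fraction of an X-ray beam transmitted through
% thickness x of a material with linear attenuation coefficient mu.
\[
  I = I_0 \, e^{-\mu x}
\]
% Dense, high-atomic-number structures such as bone have a larger mu,
% transmit fewer photons, and therefore appear white on a conventional
% radiograph, while air-filled lung transmits most photons and appears dark.
```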

In film-screen radiography, an X-ray tube generates a beam of X-rays, which is aimed at the patient. The X-rays that pass through the patient are filtered through a device called a grid or X-ray filter to reduce scatter, and strike an undeveloped film, which is held tightly to a screen of light-emitting phosphors in a light-tight cassette. The film is then developed chemically and an image appears on the film. Film-screen radiography was first displaced by phosphor plate radiography and, more recently, by digital radiography (DR) and EOS imaging. In the two latter systems, the X-rays strike sensors that convert the generated signals into digital information, which is transmitted and converted into an image displayed on a computer screen. In digital radiography the sensors form a plate, but in the EOS system, which is a slot-scanning system, a linear sensor scans the patient vertically.

Plain radiography was the only imaging modality available during the first 50 years of radiology. Due to its availability, speed, and lower cost compared to other modalities, radiography is often the first-line test of choice in radiologic diagnosis. Moreover, despite the large amount of data in CT scans, MR scans and other digital-based imaging, there are many disease entities in which the classic diagnosis is obtained by plain radiographs. Examples include various types of arthritis and pneumonia, bone tumors (especially benign ones), fractures, congenital skeletal anomalies, etc.

Mammography and DXA are two applications of low-energy projectional radiography, used for the evaluation of breast cancer and osteoporosis, respectively.

Fluoroscopy

Fluoroscopy and angiography are special applications of X-ray imaging in which a fluorescent screen and an image intensifier tube are connected to a closed-circuit television system. This allows real-time imaging of structures in motion or augmented with a radiocontrast agent. Radiocontrast agents are usually administered by swallowing or by injection into the body of the patient to delineate the anatomy and functioning of the blood vessels, the genitourinary system, or the gastrointestinal tract (GI tract). Two radiocontrast agents are presently in common use. Barium sulfate (BaSO4) is given orally or rectally for evaluation of the GI tract. Iodine, in multiple proprietary forms, is given by oral, rectal, vaginal, intra-arterial or intravenous routes. These radiocontrast agents strongly absorb or scatter X-rays, and in conjunction with real-time imaging allow demonstration of dynamic processes, such as peristalsis in the digestive tract or blood flow in arteries and veins. Iodine contrast may also be concentrated in abnormal areas more or less than in normal tissues, making abnormalities (tumors, cysts, inflammation) more conspicuous. Additionally, in specific circumstances air can be used as a contrast agent for the gastrointestinal system, and carbon dioxide can be used as a contrast agent in the venous system; in these cases, the contrast agent attenuates the X-ray radiation less than the surrounding tissues.

Computed tomography

Image from a CT scan of the brain
 
CT imaging uses X-rays in conjunction with computing algorithms to image the body. In CT, an X-ray tube opposite an X-ray detector (or detectors) in a ring-shaped apparatus rotate around a patient, producing a computer-generated cross-sectional image (tomogram). CT is acquired in the axial plane, with coronal and sagittal images produced by computer reconstruction. Radiocontrast agents are often used with CT for enhanced delineation of anatomy. Although radiographs provide higher spatial resolution, CT can detect more subtle variations in attenuation of X-rays (higher contrast resolution). CT exposes the patient to significantly more ionizing radiation than a radiograph.
Spiral multidetector CT uses 16, 64, 256 or more detectors during continuous motion of the patient through the radiation beam to obtain fine-detail images in a short exam time. With rapid administration of intravenous contrast during the CT scan, these fine-detail images can be reconstructed into three-dimensional (3D) images of the carotid, cerebral, coronary or other arteries.
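To make "computing algorithms to image the body" concrete, here is a minimal sketch of naive, unfiltered back-projection in Python with NumPy. This is a toy illustration only; clinical scanners use filtered back-projection or iterative reconstruction, and all names here are illustrative:

```python
import numpy as np

def backproject(sinogram: np.ndarray, angles_deg: np.ndarray, size: int) -> np.ndarray:
    """Naive unfiltered back-projection of a sinogram.

    sinogram: (n_angles, n_detectors) array of projection measurements.
    Each 1-D projection is smeared back across the image plane along the
    angle at which it was acquired; summing over all angles recovers a
    blurred estimate of the cross-section (a ramp filter would sharpen it).
    """
    recon = np.zeros((size, size))
    center = size // 2
    ys, xs = np.mgrid[0:size, 0:size] - center  # pixel coords about center
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # Detector bin onto which each pixel projects at this view angle.
        t = xs * np.cos(theta) + ys * np.sin(theta) + sinogram.shape[1] // 2
        t = np.clip(np.round(t).astype(int), 0, sinogram.shape[1] - 1)
        recon += proj[t]
    return recon / len(angles_deg)
```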

The introduction of computed tomography in the early 1970s revolutionized diagnostic radiology by providing clinicians with images of real three-dimensional anatomic structures. CT scanning has become the test of choice in diagnosing some urgent and emergent conditions, such as cerebral hemorrhage, pulmonary embolism (clots in the arteries of the lungs), aortic dissection (tearing of the aortic wall), appendicitis, diverticulitis, and obstructing kidney stones. Continuing improvements in CT technology, including faster scanning times and improved resolution, have dramatically increased the accuracy and usefulness of CT scanning, which may partially account for its increased use in medical diagnosis.

Ultrasound

Medical ultrasonography uses ultrasound (high-frequency sound waves) to visualize soft tissue structures in the body in real time. No ionizing radiation is involved, but the quality of the images obtained using ultrasound is highly dependent on the skill of the person (ultrasonographer) performing the exam and the patient's body size. Examinations of larger, overweight patients may have a decrease in image quality as their subcutaneous fat absorbs more of the sound waves. This results in fewer sound waves penetrating to organs and reflecting back to the transducer, resulting in loss of information and a poorer quality image. Ultrasound is also limited by its inability to image through air pockets (lungs, bowel loops) or bone. Its use in medical imaging has developed mostly within the last 30 years. The first ultrasound images were static and two-dimensional (2D), but with modern ultrasonography, 3D reconstructions can be observed in real time, effectively becoming "4D".
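The inability to image through air or bone follows from how much sound reflects at a boundary between tissues; a brief sketch of the standard relation (general acoustics, not specific to this article):

```latex
% Fraction of ultrasound intensity reflected at a flat interface between
% media with acoustic impedances Z1 and Z2 (normal incidence).
\[
  R = \left( \frac{Z_2 - Z_1}{Z_2 + Z_1} \right)^{2},
  \qquad Z = \rho c
\]
% Z (density times sound speed) is vastly mismatched at tissue-air and
% tissue-bone boundaries, so nearly all of the sound reflects and little
% penetrates, which is why lungs, bowel gas and bone block the image.
```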
Because ultrasound imaging techniques do not employ ionizing radiation to generate images (unlike radiography and CT scans), they are generally considered safer and are therefore more common in obstetrical imaging. The progression of pregnancies can be thoroughly evaluated with less concern about damage from the techniques employed, allowing early detection and diagnosis of many fetal anomalies. Growth can be assessed over time, which is important in patients with chronic disease or pregnancy-induced disease, and in multiple pregnancies (twins, triplets, etc.). Color-flow Doppler ultrasound measures the severity of peripheral vascular disease and is used by cardiologists for dynamic evaluation of the heart, heart valves and major vessels. Stenosis of the carotid arteries, for example, may be a warning sign of an impending stroke. A clot embedded deep in one of the inner veins of the legs can be found via ultrasound before it dislodges and travels to the lungs, resulting in a potentially fatal pulmonary embolism. Ultrasound is useful as a guide for performing biopsies to minimise damage to surrounding tissues, and in drainages such as thoracentesis. Small, portable ultrasound devices now replace peritoneal lavage in trauma wards by non-invasively assessing for the presence of internal bleeding and internal organ damage. Extensive internal bleeding or injury to the major organs may require surgery and repair.

Magnetic resonance imaging

MRI of the knee.
 
MRI uses strong magnetic fields to align atomic nuclei (usually hydrogen protons) within body tissues, then uses a radio signal to disturb the axis of rotation of these nuclei and observes the radio frequency signal generated as the nuclei return to their baseline states. The radio signals are collected by small antennae, called coils, placed near the area of interest. An advantage of MRI is its ability to produce images in axial, coronal, sagittal and multiple oblique planes with equal ease. MRI scans give the best soft tissue contrast of all the imaging modalities. With advances in scanning speed and spatial resolution, and improvements in computer 3D algorithms and hardware, MRI has become an important tool in musculoskeletal radiology and neuroradiology.
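The radio frequency involved is set by the Larmor relation, which ties the nuclei's precession frequency to the field strength (standard MR physics, added here for clarity):

```latex
% Larmor relation: the resonance frequency of a nucleus is proportional
% to the static field strength B0; gamma is the gyromagnetic ratio.
\[
  \omega_0 = \gamma B_0,
  \qquad
  \frac{\gamma}{2\pi} \approx 42.58~\mathrm{MHz/T} \quad \text{for } {}^{1}\mathrm{H}
\]
% At 1.5 T the hydrogen resonance is ~63.9 MHz; at 3 T it is ~127.7 MHz,
% which is why a scanner's transmit/receive coils are tuned to its field.
```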

One disadvantage is that the patient has to hold still for long periods of time in a noisy, cramped space while the imaging is performed. Claustrophobia (fear of closed spaces) severe enough to terminate the MRI exam is reported in up to 5% of patients. Recent improvements in magnet design, including stronger magnetic fields (3 teslas), shorter exam times, wider and shorter magnet bores, and more open magnet designs, have brought some relief for claustrophobic patients. However, for magnets of equivalent field strength, there is often a trade-off between image quality and open design. MRI is of great benefit in imaging the brain, spine, and musculoskeletal system. The use of MRI is currently contraindicated for patients with pacemakers, cochlear implants, some indwelling medication pumps, certain types of cerebral aneurysm clips, metal fragments in the eyes and some metallic hardware, due to the powerful magnetic fields and strong fluctuating radio signals to which the body is exposed. Areas of potential advancement include functional imaging, cardiovascular MRI, and MRI-guided therapy.

Nuclear medicine

Nuclear medicine imaging involves the administration into the patient of radiopharmaceuticals consisting of substances with affinity for certain body tissues, labeled with a radioactive tracer. The most commonly used tracers are technetium-99m, iodine-123, iodine-131, gallium-67, indium-111, thallium-201 and fludeoxyglucose (18F) (18F-FDG). The heart, lungs, thyroid, liver, brain, gallbladder, and bones are commonly evaluated for particular conditions using these techniques. While anatomical detail is limited in these studies, nuclear medicine is useful in displaying physiological function. The excretory function of the kidneys, the iodine-concentrating ability of the thyroid, blood flow to heart muscle, etc. can all be measured. The principal imaging devices are the gamma camera and the PET scanner, which detect the radiation emitted by the tracer in the body and display it as an image. With computer processing, the information can be displayed as axial, coronal and sagittal images (single-photon emission computed tomography, SPECT, or positron emission tomography, PET). In the most modern devices, nuclear medicine images can be fused with a CT scan taken quasi-simultaneously, so the physiological information can be overlaid or coregistered with the anatomical structures to improve diagnostic accuracy.
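Because the tracer decays during the study, timing matters; the standard decay law, with technetium-99m's well-known half-life as the example (general physics, added for clarity):

```latex
% Radioactive decay: tracer activity falls exponentially with time.
\[
  A(t) = A_0 \, e^{-\lambda t},
  \qquad
  \lambda = \frac{\ln 2}{t_{1/2}}
\]
% For technetium-99m, t_{1/2} is about 6 hours, so activity halves every
% 6 hours: an image delayed by 12 hours sees only ~25% of the injected
% activity, which constrains scheduling and injected dose.
```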

Positron emission tomography (PET) scanning uses positron-emitting tracers rather than the single gamma rays detected by gamma cameras. The positrons annihilate to produce two gamma rays travelling in opposite directions, which are detected coincidentally, thus improving resolution. In PET scanning, a radioactive, biologically active substance, most often 18F-FDG, is injected into the patient, and the radiation emitted by the patient is detected to produce multiplanar images of the body. Metabolically more active tissues, such as cancer, concentrate the active substance more than normal tissues do. PET images can be combined (or "fused") with anatomic (CT) imaging to more accurately localize PET findings and thereby improve diagnostic accuracy.
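The energy of those two gamma rays is fixed by the electron rest mass, which is what the coincidence electronics are tuned to (standard physics, added for clarity):

```latex
% Electron-positron annihilation at rest: each particle's rest energy is
% carried off by one photon, emitted back-to-back to conserve momentum.
\[
  e^{+} + e^{-} \longrightarrow 2\gamma,
  \qquad
  E_\gamma = m_e c^{2} \approx 511~\mathrm{keV}
\]
% Detecting the two 511 keV photons in coincidence defines a line of
% response along which the annihilation, and hence the tracer, must lie.
```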

Fusion technology has gone further, combining PET with MRI in the same way as PET with CT. PET/MRI fusion, largely practiced in academic and research settings, could potentially play a crucial role in detailed brain imaging, breast cancer screening, and small-joint imaging of the foot. The technology recently matured after overcoming the technical hurdles of altered positron movement in strong magnetic fields, which affects the resolution of PET images, and of attenuation correction.

Interventional radiology

Interventional radiology (IR or sometimes VIR for vascular and interventional radiology) is a subspecialty of radiology in which minimally invasive procedures are performed using image guidance. Some of these procedures are done for purely diagnostic purposes (e.g., angiogram), while others are done for treatment purposes (e.g., angioplasty).

The basic concept behind interventional radiology is to diagnose or treat pathologies with the most minimally invasive technique possible. Minimally invasive procedures are currently performed more than ever before. These procedures are often performed with the patient fully awake, with little or no sedation required. Interventional radiologists and interventional radiographers diagnose and treat several disorders, including peripheral vascular disease and renal artery stenosis, and perform inferior vena cava filter placement, gastrostomy tube placement, biliary stenting and hepatic interventions. Images are used for guidance, and the primary instruments used during the procedure are needles and catheters. The images provide maps that allow the clinician to guide these instruments through the body to the areas containing disease. By minimizing the physical trauma to the patient, peripheral interventions can reduce infection rates and recovery times, as well as hospital stays. To be a trained interventionalist in the United States, an individual completes a five-year residency in radiology and a one- or two-year fellowship in IR.

Analysis of images

A radiologist interprets medical images on a modern picture archiving and communication system (PACS) workstation. San Diego, CA, 2010.

Teleradiology

Teleradiology is the transmission of radiographic images from one location to another for interpretation by an appropriately trained professional, usually a radiologist or reporting radiographer. It is most often used to allow rapid interpretation of emergency room, ICU and other urgent examinations outside usual operating hours, at night and on weekends. In these cases, the images can be sent across time zones (e.g. to Spain, Australia, India) with the receiving clinician working normal daylight hours. At present, however, large private teleradiology companies provide most after-hours coverage in the U.S., employing radiologists working night shifts. Teleradiology can also be used to obtain consultation with an expert or subspecialist about a complicated or puzzling case. In the U.S., many hospitals outsource their radiology departments to radiologists in India, due to the lower cost and availability of high-speed internet access.

Teleradiology requires a sending station, a high-speed internet connection, and a high-quality receiving station. At the transmission station, plain radiographs are passed through a digitizing machine before transmission, while CT, MRI, ultrasound and nuclear medicine scans can be sent directly, as they are already digital data. The computer at the receiving end will need to have a high-quality display screen that has been tested and cleared for clinical purposes. Reports are then transmitted to the requesting clinician. 

The major advantage of teleradiology is the ability to use different time zones to provide real-time emergency radiology services around the clock. The disadvantages include higher costs, limited contact between the referrer and the reporting clinician, and the inability to cover procedures requiring an onsite reporting clinician. Laws and regulations concerning the use of teleradiology vary among the states, with some requiring a license to practice medicine in the state sending the radiologic exam. In the U.S., some states require the teleradiology report to be preliminary, with the official report issued by a hospital staff radiologist. Lastly, teleradiology reporting can potentially be assisted or partially automated with modern machine learning techniques.

X-ray of a hand with automatic calculation of bone age.

Professional training

United States

Radiology is a field in medicine that has expanded rapidly after 2000 due to advances in computer technology, which is closely linked to modern imaging techniques. Applying for residency positions in radiology is relatively competitive. Applicants are often near the top of their medical school classes, with high USMLE (board) examination scores. Diagnostic radiologists must complete prerequisite undergraduate education, four years of medical school to earn a medical degree (D.O. or M.D.), one year of internship, and four years of residency training. After residency, radiologists may pursue one or two years of additional specialty fellowship training.

The American Board of Radiology (ABR) administers professional certification in Diagnostic Radiology, Radiation Oncology and Medical Physics, as well as subspecialty certification in neuroradiology, nuclear radiology, pediatric radiology and vascular and interventional radiology. "Board Certification" in diagnostic radiology requires successful completion of two examinations. The Core Exam is given after 36 months of residency. This computer-based examination is given twice a year in Chicago and Tucson. It encompasses 18 categories: passing all 18 is a pass; failing one to five categories results in a "Conditioned" exam, and the resident must retake and pass the failed categories; failing more than five categories is a failed exam. The Certification Exam can be taken 15 months after completion of the radiology residency. This computer-based examination consists of five modules and is graded pass-fail. It is given twice a year in Chicago and Tucson. Recertification examinations are taken every 10 years, with additional required continuing medical education as outlined in the Maintenance of Certification document.
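The Core Exam's category rule reduces to a simple classification; a minimal sketch (the function name and return labels are illustrative; only the thresholds come from the text above):

```python
def core_exam_result(categories_failed: int) -> str:
    """Classify an ABR Core Exam attempt by categories failed (of 18),
    per the rule described above: 0 = pass, 1-5 = conditioned, >5 = fail."""
    if not 0 <= categories_failed <= 18:
        raise ValueError("categories_failed must be between 0 and 18")
    if categories_failed == 0:
        return "pass"
    if categories_failed <= 5:
        return "conditioned: retake and pass the failed categories"
    return "fail"
```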

Certification may also be obtained from the American Osteopathic Board of Radiology (AOBR) and the American Board of Physician Specialties. 

Following completion of residency training, Radiologists may either begin practicing as a general Diagnostic Radiologist or enter into subspecialty training programs known as fellowships. Examples of subspeciality training in radiology include abdominal imaging, thoracic imaging, cross-sectional/ultrasound, MRI, musculoskeletal imaging, interventional radiology, neuroradiology, interventional neuroradiology, paediatric radiology, nuclear medicine, emergency radiology, breast imaging and women's imaging. Fellowship training programs in radiology are usually one or two years in length.

Some medical schools in the US have started to incorporate a basic radiology introduction into their core MD training. New York Medical College, the Wayne State University School of Medicine, Weill Cornell Medicine, the Uniformed Services University, and the University of South Carolina School of Medicine offer an introduction to radiology during their respective MD programs. Campbell University School of Osteopathic Medicine also integrates imaging material into their curriculum early in the first year. 

Radiographic exams are usually performed by radiographers. Qualifications for radiographers vary by country, but many radiographers are now required to hold a degree.

Veterinary Radiologists are veterinarians who specialize in the use of X-rays, ultrasound, MRI and nuclear medicine for diagnostic imaging or treatment of disease in animals. They are certified in either diagnostic radiology or radiation oncology by the American College of Veterinary Radiology.

United Kingdom

Radiology is an extremely competitive speciality in the UK, attracting applicants from a broad range of backgrounds. Applicants are welcomed directly from the foundation programme, as well as those who have completed higher training. Recruitment and selection into clinical radiology training posts in England, Scotland and Wales is conducted through an annual, nationally coordinated process lasting from November to March. In this process, all applicants are required to pass a Specialty Recruitment Assessment (SRA) test. Those with a test score above a certain threshold are offered a single interview at the London and the South East Recruitment Office. At a later stage, applicants declare which programmes they prefer, but may in some cases be placed in a neighbouring region.

The training programme lasts a total of five years. During this time, doctors rotate through different subspecialities, such as paediatrics, musculoskeletal or neuroradiology, and breast imaging. During the first year of training, radiology trainees are expected to pass the first part of the Fellowship of the Royal College of Radiologists (FRCR) exam, which comprises medical physics and anatomy examinations. Following completion of part 1, they are then required to pass six written exams (part 2A), which cover all the subspecialities. Successful completion of these allows them to complete the FRCR by passing part 2B, which includes rapid reporting and a long case discussion.

After achieving a certificate of completion of training (CCT), many fellowship posts exist in specialities such as neurointervention and vascular intervention, which allow the doctor to work as an interventional radiologist. In some cases, the CCT date can be deferred by a year to include these fellowship programmes.

UK radiology registrars are represented by the Society of Radiologists in Training (SRT), which was founded in 1993 under the auspices of the Royal College of Radiologists. The society is a nonprofit organisation, run by radiology registrars specifically to promote radiology training and education in the UK. Annual meetings are held, which trainees across the country are encouraged to attend.

Currently, a shortage of radiologists in the UK has created opportunities in all specialities, and with the increased reliance on imaging, demand is expected to increase in the future. Radiographers, and less frequently nurses, are often trained to take on many of these roles in order to help meet demand. Radiographers may often manage a "list" of a particular set of procedures after being approved locally and signed off by a consultant radiologist; alternatively, radiographers may simply operate a list on behalf of a radiologist or other physician. Most often, if a radiographer operates a list autonomously, they act as the Operator and Practitioner under the Ionising Radiation (Medical Exposures) Regulations 2000. Radiographers are represented by a variety of bodies, most often the Society and College of Radiographers. Collaboration with nurses is also common, where a list may be jointly organised between the nurse and radiographer.

Germany

After obtaining medical licensure, German Radiologists complete a five-year residency, culminating with a board examination (known as Facharztprüfung).

Italy

The radiology training program in Italy increased from four to five years in 2008. Further training is required for specialization in radiotherapy or nuclear medicine.

The Netherlands

Dutch radiologists complete a five-year residency program after completing the 6-year MD program.

India

The radiology training course is a postgraduate 3-year program (MD/DNB Radiology) or a 2-year diploma (DMRD).

Singapore

Radiologists in Singapore complete a five-year undergraduate medicine degree, followed by a one-year medical internship and a five-year residency program. Some radiologists may elect to complete a one- or two-year fellowship for further sub-specialization in fields such as interventional radiology.

Wednesday, October 30, 2019

Social vulnerability

From Wikipedia, the free encyclopedia

In its broadest sense, social vulnerability is one dimension of vulnerability to multiple stressors and shocks, including abuse, social exclusion and natural hazards. Social vulnerability refers to the inability of people, organizations, and societies to withstand adverse impacts from multiple stressors to which they are exposed. These impacts are due in part to characteristics inherent in social interactions, institutions, and systems of cultural values.

Because it is most apparent when calamity occurs, many studies of social vulnerability are found in risk management literature.

Definitions

"Vulnerability" derives from the Latin word vulnerare (to be wounded) and describes the potential to be harmed physically and/or psychologically. Vulnerability is often understood as the counterpart of resilience, and is increasingly studied in linked social-ecological systems. The Yogyakarta Principles, one of the international human rights instruments use the term "vulnerability" as such potential to abuse or social exclusion.

The concept of social vulnerability emerged most recently within the discourse on natural hazards and disasters. To date no one definition has been agreed upon. Similarly, multiple theories of social vulnerability exist. Most work conducted so far focuses on empirical observation and conceptual models. Thus, current social vulnerability research is a middle range theory and represents an attempt to understand the social conditions that transform a natural hazard (e.g. flood, earthquake, mass movements etc.) into a social disaster. The concept emphasizes two central themes:
  1. Both the causes and the phenomenon of disasters are defined by social processes and structures. Thus it is not only a geo- or biophysical hazard, but rather the social context that is taken into account to understand “natural” disasters (Hewitt 1983).
  2. Although different groups of a society may share a similar exposure to a natural hazard, the hazard has varying consequences for these groups, since they have diverging capacities and abilities to handle the impact of a hazard.
Taking a structuralist view, Hewitt (1997, p. 143) defines vulnerability as being:
...essentially about the human ecology of endangerment...and is embedded in the social geography of settlements and land uses, and the space of distribution of influence in communities and political organisation.
This is in contrast to the more socially focused view of Blaikie et al. (1994, p. 9), who define vulnerability as the:
...set of characteristics of a group or individual in terms of their capacity to anticipate, cope with, resist and recover from the impact of a natural hazard. It involves a combination of factors that determine the degree to which someone's life and livelihood is at risk by a discrete and identifiable event in nature or society.

History of the concept

In the 1970s the concept of vulnerability was introduced into the discourse on natural hazards and disasters by O'Keefe, Westgate and Wisner (O'Keefe, Westgate et al. 1976). In "taking the naturalness out of natural disasters", these authors insisted that socio-economic conditions are the causes of natural disasters. Using empirical data, the work illustrated that the occurrence of disasters had increased over the previous 50 years, paralleled by an increasing loss of life. The work also showed that the greatest losses of life are concentrated in underdeveloped countries, leading the authors to conclude that vulnerability is increasing there.

Chambers put these empirical findings on a conceptual level and argued that vulnerability has an external and an internal side: people are exposed to specific natural and social risks, and at the same time possess different capacities to deal with their exposure by means of various strategies of action (Chambers 1989). This argument was refined by Blaikie, Cannon, Davis and Wisner, who went on to develop the Pressure and Release (PAR) model (see below). Watts and Bohle argued similarly by formalizing the "social space of vulnerability", which is constituted by exposure, capacity and potentiality (Watts and Bohle 1993).

Susan Cutter developed an integrative approach (hazard of place), which tries to consider both multiple geo- and biophysical hazards on the one hand as well as social vulnerabilities on the other hand (Cutter, Mitchell et al. 2000). Recently, Oliver-Smith grasped the nature-culture dichotomy by focusing both on the cultural construction of the people-environment-relationship and on the material production of conditions that define the social vulnerability of people (Oliver-Smith and Hoffman 2002).

Research on social vulnerability to date has stemmed from a variety of fields in the natural and social sciences. Each field has defined the concept differently, manifest in a host of definitions and approaches (Blaikie, Cannon et al. 1994; Henninger 1998; Frankenberger, Drinkwater et al. 2000; Alwang, Siegel et al. 2001; Oliver-Smith 2003; Cannon, Twigg et al. 2005). Yet some common threads run through most of the available work.

Within society

Although considerable research attention has examined components of biophysical vulnerability and the vulnerability of the built environment (Mileti, 1999), we currently know the least about the social aspects of vulnerability (Cutter et al., 2003). Socially created vulnerabilities are largely ignored, mainly due to the difficulty in quantifying them. Social vulnerability is created through the interaction of social forces and multiple stressors, and resolved through social (as opposed to individual) means. While individuals within a socially vulnerable context may break through the “vicious cycle,” social vulnerability itself can persist because of structural—i.e. social and political—influences that reinforce vulnerability.

Social vulnerability is partially the product of social inequalities—those social factors that influence or shape the susceptibility of various groups to harm and that also govern their ability to respond (Cutter et al., 2003). Social vulnerability is not registered by exposure to hazards alone; it also resides in the sensitivity and resilience of the system as it prepares for, copes with and recovers from such hazards (Turner et al., 2003). At the same time, a focus limited to the stresses associated with a particular vulnerability analysis is insufficient for understanding the impact on, and responses of, the affected system or its components (Mileti, 1999; Kasperson et al., 2003; White & Haas, 1974). These issues are often underlined in attempts to model the concept (see Models of Social Vulnerability).

Models

Risk-Hazard (RH) model (diagram after Turner et al., 2003), showing the impact of a hazard as a function of exposure and sensitivity. The chain sequence begins with the hazard, and the concept of vulnerability is noted implicitly as represented by white arrows.
 
Two principal archetypal reduced-form models that have informed vulnerability analysis are presented here: the Risk-Hazard (RH) model and the Pressure and Release (PAR) model.

Risk-Hazard (RH) Model

Initial RH models sought to understand the impact of a hazard as a function of exposure to the hazardous event and the sensitivity of the entity exposed (Turner et al., 2003). Applications of this model in environmental and climate impact assessments generally emphasised exposure and sensitivity to perturbations and stressors (Kates, 1985; Burton et al., 1978) and worked from the hazard to the impacts (Turner et al., 2003). However, several inadequacies became apparent. Principally, the model does not treat the ways in which the systems in question amplify or attenuate the impacts of the hazard (Martine & Guzman, 2002). Nor does it address the distinctions among exposed subsystems and components that lead to significant variations in the consequences of the hazards, or the role of political economy in shaping differential exposure and consequences (Blaikie et al., 1994; Hewitt, 1997). These shortcomings led to the development of the PAR model.
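To make this reduced-form reading concrete, the following is a minimal, purely illustrative sketch in which impact is treated as a function of exposure and sensitivity. The multiplicative form, the 0-1 scales and the numbers are assumptions of this example, not part of the published model.

# Minimal illustrative sketch of the reduced-form RH reading: the impact of
# a hazard is modelled as a function of exposure and sensitivity. The
# multiplicative form and the 0-1 scales are assumptions of this example,
# not part of the published model.

def rh_impact(exposure: float, sensitivity: float) -> float:
    """Impact = f(exposure, sensitivity); f is taken to be a product here."""
    if not (0.0 <= exposure <= 1.0 and 0.0 <= sensitivity <= 1.0):
        raise ValueError("this sketch scales exposure and sensitivity to [0, 1]")
    return exposure * sensitivity

# The same flood (exposure 0.8) produces a larger impact on a settlement
# whose assets are highly flood-sensitive (0.9) than on a less sensitive
# one (0.3).
print(round(rh_impact(0.8, 0.9), 2))  # 0.72
print(round(rh_impact(0.8, 0.3), 2))  # 0.24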

Pressure and Release (PAR) Model

Pressure and Release (PAR) model after Blaikie et al. (1994), showing the progression of vulnerability. The diagram shows a disaster as the intersection between socio-economic pressures on the left and physical exposures (natural hazards) on the right.
The PAR model understands a disaster as the intersection between socio-economic pressure and physical exposure. Risk is explicitly defined as a function of the perturbation, stressor, or stress and the vulnerability of the exposed unit (Blaikie et al., 1994). In this way, it directs attention to the conditions that make exposure unsafe, leading to vulnerability, and to the causes creating these conditions. Used primarily to address social groups facing disaster events, the model emphasises distinctions in vulnerability between different exposure units such as social class and ethnicity. The model distinguishes three components on the social side (root causes, dynamic pressures and unsafe conditions) and one component on the natural side, the natural hazard itself. Principal root causes include “economic, demographic and political processes”, which affect the allocation and distribution of resources between different groups of people. Dynamic pressures translate economic and political processes into local circumstances (e.g. migration patterns). Unsafe conditions are the specific forms in which vulnerability is expressed in time and space, such as those induced by the physical environment, the local economy or social relations (Blaikie, Cannon et al. 1994).
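The PAR literature often condenses this progression into the pseudo-equation Risk = Hazard x Vulnerability (R = H x V). The sketch below is purely illustrative: the 0-1 scales, the equal weighting of the three social-side components and the numbers are assumptions of this example rather than the published model.

# Minimal illustrative sketch of the PAR pseudo-equation R = H x V.
# The 0-1 scales and the equal weighting of the three social-side
# components are assumptions of this example, not the published model.

def par_vulnerability(root_causes: float, dynamic_pressures: float,
                      unsafe_conditions: float) -> float:
    """Compose the three social-side components into one 0-1 vulnerability
    score (equal weighting is an assumption of this sketch)."""
    return (root_causes + dynamic_pressures + unsafe_conditions) / 3.0

def par_risk(hazard: float, vulnerability: float) -> float:
    """Risk = Hazard x Vulnerability, both scaled to [0, 1] here."""
    return hazard * vulnerability

# Hypothetical case: strong root causes (0.7), moderate dynamic pressures
# (0.5) and very unsafe conditions (0.8) facing a hazard of intensity 0.6.
v = par_vulnerability(root_causes=0.7, dynamic_pressures=0.5, unsafe_conditions=0.8)
print(round(par_risk(hazard=0.6, vulnerability=v), 2))  # 0.4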
Although it explicitly highlights vulnerability, the PAR model appears insufficiently comprehensive for the broader concerns of sustainability science (Turner et al., 2003). Primarily, it does not address the coupled human-environment system in the sense of considering the vulnerability of biophysical subsystems (Kasperson et al., 2003), and it provides little detail on the structure of the hazard's causal sequence. The model also tends to underplay feedbacks beyond the system of analysis that the integrative RH models included (Kates, 1985).

Criticism

Some authors criticise the conceptualisation of social vulnerability for overemphasising the social, political and economic processes and structures that lead to vulnerable conditions. Inherent in such a view is the tendency to understand people as passive victims (Hewitt 1997) and to neglect the subjective and intersubjective interpretation and perception of disastrous events. Bankoff criticises the very basis of the concept, since in his view it is shaped by a knowledge system that was developed and formed within the academic environment of Western countries and therefore inevitably represents the values and principles of that culture. According to Bankoff, the ultimate aim underlying this concept is to depict large parts of the world as dangerous and hostile in order to provide further justification for interference and intervention (Bankoff 2003).

Current and future research

Social vulnerability research has become a deeply interdisciplinary science, rooted in the modern realization that humans are the causal agents of disasters – i.e., disasters are never natural, but a consequence of human behavior. The desire to understand geographic, historic, and socio-economic characteristics of social vulnerability motivates much of the research being conducted around the world today. 

Two principal goals are currently driving the field of social vulnerability research:
  1. The design of models which explain vulnerability and the root causes which create it, and
  2. The development of indicators and indexes which attempt to map vulnerability over time and space (Villágran de León 2006).
The temporal and spatial aspects of vulnerability science are pervasive, particularly in research that attempts to demonstrate the impact of development on social vulnerability. Geographic Information Systems (GIS) are increasingly being used to map vulnerability and to better understand how various phenomena (hydrological, meteorological, geophysical, social, political and economic) affect human populations.
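As a concrete illustration of the second goal above, the following minimal sketch builds a composite vulnerability index from standardized (z-scored) indicators, in the spirit of indicator-based approaches such as Cutter's Social Vulnerability Index. The district names, the three indicator variables and the equal weights are hypothetical assumptions of this example, not any published index.

# Minimal illustrative sketch of an indicator-based composite index.
# The district names, indicators and equal weights are hypothetical.

import statistics

# Hypothetical per-district indicators; higher values = more vulnerable.
districts = {
    "A": {"pct_poverty": 22.0, "pct_over_65": 18.0, "pct_no_vehicle": 12.0},
    "B": {"pct_poverty": 9.0,  "pct_over_65": 25.0, "pct_no_vehicle": 4.0},
    "C": {"pct_poverty": 31.0, "pct_over_65": 10.0, "pct_no_vehicle": 20.0},
}

indicators = ["pct_poverty", "pct_over_65", "pct_no_vehicle"]

def z_scores(values):
    """Standardize a list of values to zero mean and unit variance."""
    mean, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mean) / sd for v in values]

# Standardize each indicator across districts, then average the z-scores
# (equal weights) to obtain one index value per district.
names = list(districts)
standardized = {ind: z_scores([districts[n][ind] for n in names]) for ind in indicators}
index = {n: statistics.mean(standardized[ind][i] for ind in indicators)
         for i, n in enumerate(names)}

# Rank districts from most to least vulnerable under this toy index.
for name, score in sorted(index.items(), key=lambda kv: -kv[1]):
    print(f"district {name}: vulnerability index {score:+.2f}")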

Researchers have yet to develop reliable models capable of predicting future outcomes based upon existing theories and data. Designing and testing the validity of such models, particularly at the sub-national scale at which vulnerability reduction takes place, is expected to become a major component of social vulnerability research in the future.

An even greater aspiration in social vulnerability research is the search for a single, broadly applicable theory that can be applied systematically at a variety of scales around the world. Climate change scientists, building engineers, public health specialists, and many other related professions have already made major strides toward common approaches. Some social vulnerability scientists argue that it is time for them to do the same, and they are creating a variety of new forums in which to seek consensus on common frameworks, standards, tools, and research priorities. Many academic, policy, and public/NGO organizations promote a globally applicable approach in social vulnerability science and policy.

Disasters often expose pre-existing societal inequalities that lead to disproportionate loss of property, injury, and death (Wisner, Blaikie, Cannon, & Davis, 2004). Some disaster researchers argue that particular groups of people are placed disproportionately at risk from hazards. Minorities, immigrants, women, children, the poor, and people with disabilities are among those who have been identified as particularly vulnerable to the impacts of disaster (Cutter et al., 2003; Peek, 2008; Stough, Sharp, Decker & Wilker, 2010).

Since 2005, the Spanish Red Cross has developed a set of indicators to measure the multi-dimensional aspects of social vulnerability. These indicators are generated through the statistical analysis of more than 500,000 people who suffer from economic strain and social vulnerability and who have a personal record containing 220 variables in the Red Cross database. An Index on Social Vulnerability in Spain is produced annually, for both adults and children.

Collective vulnerability

Collective vulnerability is a state in which the integrity and social fabric of a community is or was threatened by traumatic events or repeated collective violence. According to the collective vulnerability hypothesis, the shared experience of vulnerability and the loss of shared normative references can lead to collective reactions aimed at reestablishing the lost norms and can trigger forms of collective resilience.

This theory has been developed by social psychologists to study support for human rights. It is rooted in the observation that devastating collective events are sometimes followed by demands for measures that might prevent similar events from happening again. For instance, the Universal Declaration of Human Rights was a direct consequence of the horrors of World War II. Psychological research by Willem Doise and colleagues shows that after people have experienced a collective injustice, they are more likely to support the reinforcement of human rights. Populations who have collectively endured systematic human rights violations are more critical of national authorities and less tolerant of rights violations. Analyses performed by Dario Spini, Guy Elcheroth and Rachel Fasel on the Red Cross's “People on War” survey show that individuals with direct experience of armed conflict are less keen to support humanitarian norms. However, in countries in which most of the social groups in conflict share a similar level of victimization, people express a greater need to reestablish protective social norms such as human rights, regardless of the magnitude of the conflict.

Research opportunities and challenges

Research on social vulnerability is expanding rapidly to fill the research and action gaps in this field. This work can be characterized in three major groupings: research, public awareness, and policy. The following issues have been identified as requiring further attention in order to understand and reduce social vulnerability (Warner and Loster 2006):
Research
1. Foster a common understanding of social vulnerability – its definition(s), theories, and measurement approaches.
2. Aim for science that produces tangible and applied outcomes.
3. Advance tools and methodologies to reliably measure social vulnerability.
Public awareness
4. Strive for better understanding of nonlinear relationships and interacting systems (environment, social and economic, hazards), and present this understanding coherently to maximize public understanding.
5. Disseminate and present results in a coherent manner for the use of lay audiences. Develop straightforward information and practical education tools.
6. Recognize the potential of the media as a bridging device between science and society.
Policy
7. Involve the local communities and stakeholders considered in vulnerability studies.
8. Strengthen people's ability to help themselves, including an (audible) voice in resource allocation decisions.
9. Create partnerships that allow stakeholders from local, national, and international levels to contribute their knowledge.
10. Generate individual and local trust and ownership of vulnerability reduction efforts.
Debate and ongoing discussion surround the causes of and possible solutions to social vulnerability. Momentum is gathering around practice-oriented research on social vulnerability, carried out in cooperation with scientists and policy experts worldwide. In the future, links between ongoing policy and academic work will be strengthened to solidify the science, consolidate the research agenda, and fill knowledge gaps about the causes of and solutions for social vulnerability.

Algorithmic information theory

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Algorithmic_information_theory ...