Monday, May 1, 2017

Malaria

From Wikipedia, the free encyclopedia
 
Malaria
A Plasmodium from the saliva of a female mosquito moving across a mosquito cell
Specialty: Infectious disease
Symptoms: Fever, vomiting, headache[1]
Usual onset: 10–15 days post exposure[2]
Causes: Plasmodium spread by mosquitoes[1]
Diagnostic method: Examination of the blood, antigen detection tests[1]
Prevention: Mosquito nets, insect repellent, mosquito control, medications[1]
Medication: Antimalarial medication[2]
Frequency: 296 million (2015)[3]
Deaths: 730,500 (2015)[4]

Malaria is a mosquito-borne infectious disease of humans and other animals caused by parasitic protozoans (a group of single-celled microorganisms) belonging to the genus Plasmodium.[2] Malaria causes symptoms that typically include fever, feeling tired, vomiting, and headaches. In severe cases it can cause yellow skin, seizures, coma, or death.[1] Symptoms usually begin ten to fifteen days after being bitten. If not properly treated, people may have recurrences of the disease months later.[2] In those who have recently survived an infection, reinfection usually causes milder symptoms. This partial resistance disappears over months to years if the person has no continuing exposure to malaria.[1]

The disease is most commonly transmitted by an infected female Anopheles mosquito. The mosquito bite introduces the parasites from the mosquito's saliva into a person's blood.[2] The parasites travel to the liver where they mature and reproduce. Five species of Plasmodium can infect and be spread by humans.[1] Most deaths are caused by P. falciparum because P. vivax, P. ovale, and P. malariae generally cause a milder form of malaria.[2][1] The species P. knowlesi rarely causes disease in humans.[2] Malaria is typically diagnosed by the microscopic examination of blood using blood films, or with antigen-based rapid diagnostic tests.[1] Methods that use the polymerase chain reaction to detect the parasite's DNA have been developed, but are not widely used in areas where malaria is common due to their cost and complexity.[5]

The risk of disease can be reduced by preventing mosquito bites through the use of mosquito nets and insect repellents, or with mosquito control measures such as spraying insecticides and draining standing water.[1] Several medications are available to prevent malaria in travellers to areas where the disease is common. Occasional doses of the combination medication sulfadoxine/pyrimethamine are recommended in infants and after the first trimester of pregnancy in areas with high rates of malaria. Despite a need, no effective vaccine exists, although efforts to develop one are ongoing.[2] The recommended treatment for malaria is a combination of antimalarial medications that includes an artemisinin.[2][1] The second medication may be either mefloquine, lumefantrine, or sulfadoxine/pyrimethamine.[6] Quinine along with doxycycline may be used if an artemisinin is not available.[6] It is recommended that in areas where the disease is common, malaria be confirmed if possible before treatment is started, owing to concerns about increasing drug resistance. Resistance among the parasites has developed to several antimalarial medications; for example, chloroquine-resistant P. falciparum has spread to most malarial areas, and resistance to artemisinin has become a problem in some parts of Southeast Asia.[2]

The disease is widespread in the tropical and subtropical regions that exist in a broad band around the equator.[1] This includes much of Sub-Saharan Africa, Asia, and Latin America.[2] In 2015, there were 296 million cases of malaria worldwide resulting in an estimated 731,000 deaths.[3][4] Approximately 90% of both cases and deaths occurred in Africa.[7] Rates of disease have decreased from 2000 to 2015 by 37%,[7] but increased from 2014 during which there were 198 million cases.[8] Malaria is commonly associated with poverty and has a major negative effect on economic development.[9][10] In Africa, it is estimated to result in losses of US$12 billion a year due to increased healthcare costs, lost ability to work, and negative effects on tourism.[11]

Signs and symptoms

Main symptoms of malaria[12]

The signs and symptoms of malaria typically begin 8–25 days following infection;[12] however, symptoms may occur later in those who have taken antimalarial medications as prevention.[5] Initial manifestations of the disease—common to all malaria species—are similar to flu-like symptoms,[13] and can resemble other conditions such as sepsis, gastroenteritis, and viral diseases.[5] The presentation may include headache, fever, shivering, joint pain, vomiting, hemolytic anemia, jaundice, hemoglobin in the urine, retinal damage, and convulsions.[14]

The classic symptom of malaria is paroxysm—a cyclical occurrence of sudden coldness followed by shivering and then fever and sweating, occurring every two days (tertian fever) in P. vivax and P. ovale infections, and every three days (quartan fever) for P. malariae. P. falciparum infection can cause recurrent fever every 36–48 hours, or a less pronounced and almost continuous fever.[15]

Severe malaria is usually caused by P. falciparum (often referred to as falciparum malaria). Symptoms of falciparum malaria arise 9–30 days after infection.[13] Individuals with cerebral malaria frequently exhibit neurological symptoms, including abnormal posturing, nystagmus, conjugate gaze palsy (failure of the eyes to turn together in the same direction), opisthotonus, seizures, or coma.[13]

Complications

Malaria has several serious complications. Among these is the development of respiratory distress, which occurs in up to 25% of adults and 40% of children with severe P. falciparum malaria. Possible causes include respiratory compensation of metabolic acidosis, noncardiogenic pulmonary oedema, concomitant pneumonia, and severe anaemia. Although rare in young children with severe malaria, acute respiratory distress syndrome occurs in 5–25% of adults and up to 29% of pregnant women.[16]

Coinfection of HIV with malaria increases mortality.[17] Renal failure is a feature of blackwater fever, where hemoglobin from lysed red blood cells leaks into the urine.[13]

Infection with P. falciparum may result in cerebral malaria, a form of severe malaria that involves encephalopathy. It is associated with retinal whitening, which may be a useful clinical sign in distinguishing malaria from other causes of fever.[18] An enlarged spleen, an enlarged liver or both, severe headache, low blood sugar, and hemoglobin in the urine with renal failure may occur.[13] Complications may include spontaneous bleeding, coagulopathy, and shock.[19]

Malaria in pregnant women is an important cause of stillbirths, infant mortality, abortion and low birth weight,[20] particularly in P. falciparum infection, but also with P. vivax.[21]

Cause

Malaria parasites belong to the genus Plasmodium (phylum Apicomplexa). In humans, malaria is caused by P. falciparum, P. malariae, P. ovale, P. vivax and P. knowlesi.[22][23] Among those infected, P. falciparum is the most common species identified (~75%) followed by P. vivax (~20%).[5] Although P. falciparum traditionally accounts for the majority of deaths,[24] recent evidence suggests that P. vivax malaria is associated with potentially life-threatening conditions about as often as with a diagnosis of P. falciparum infection.[25] P. vivax is proportionally more common outside Africa.[26] There have been documented human infections with several species of Plasmodium from higher apes; however, except for P. knowlesi—a zoonotic species that causes malaria in macaques[23]—these are mostly of limited public health importance.[27]

Global warming is likely to affect malaria transmission, but the severity and geographic distribution of such effects is uncertain.[28][29]

Life cycle

The life cycle of malaria parasites. A mosquito causes an infection by a bite. First, sporozoites enter the bloodstream, and migrate to the liver. They infect liver cells, where they multiply into merozoites, rupture the liver cells, and return to the bloodstream. The merozoites infect red blood cells, where they develop into ring forms, trophozoites and schizonts that in turn produce further merozoites. Sexual forms are also produced, which, if taken up by a mosquito, will infect the insect and continue the life cycle.

In the life cycle of Plasmodium, a female Anopheles mosquito (the definitive host) transmits a motile infective form (called the sporozoite) to a vertebrate host such as a human (the secondary host), thus acting as a transmission vector. A sporozoite travels through the blood vessels to liver cells (hepatocytes), where it reproduces asexually (tissue schizogony), producing thousands of merozoites. These infect new red blood cells and initiate a series of asexual multiplication cycles (blood schizogony) that produce 8 to 24 new infective merozoites, at which point the cells burst and the infective cycle begins anew.[30]

Other merozoites develop into immature gametocytes, which are the precursors of male and female gametes. When a fertilized mosquito bites an infected person, gametocytes are taken up with the blood and mature in the mosquito gut. The male and female gametocytes fuse and form an ookinete—a fertilized, motile zygote. Ookinetes develop into new sporozoites that migrate to the insect's salivary glands, ready to infect a new vertebrate host. The sporozoites are injected into the skin, in the saliva, when the mosquito takes a subsequent blood meal.[31]

Only female mosquitoes feed on blood; male mosquitoes feed on plant nectar and do not transmit the disease. The females of the Anopheles genus of mosquito prefer to feed at night. They usually start searching for a meal at dusk and will continue throughout the night until taking a meal.[32] Malaria parasites can also be transmitted by blood transfusions, although this is rare.[33]

Recurrent malaria

Symptoms of malaria can recur after varying symptom-free periods. Depending upon the cause, recurrence can be classified as recrudescence, relapse, or reinfection. Recrudescence is when symptoms return after a symptom-free period; it is caused by parasites surviving in the blood as a result of inadequate or ineffective treatment.[34] Relapse is when symptoms reappear after the parasites have been eliminated from the blood but persist as dormant hypnozoites in liver cells. Relapse commonly occurs after 8–24 weeks and is commonly seen with P. vivax and P. ovale infections.[5]

P. vivax malaria cases in temperate areas often involve overwintering by hypnozoites, with relapses beginning the year after the mosquito bite.[35] Reinfection means the parasite that caused the past infection was eliminated from the body but a new parasite was introduced. Reinfection cannot readily be distinguished from recrudescence, although recurrence of infection within two weeks of treatment for the initial infection is typically attributed to treatment failure.[36] People may develop some immunity when exposed to frequent infections.[37]

Pathophysiology

Micrograph of a placenta from a stillbirth due to maternal malaria. H&E stain. Red blood cells are anuclear; blue/black staining in bright red structures (red blood cells) indicates foreign nuclei from the parasites.
Electron micrograph of a Plasmodium falciparum-infected red blood cell (center), illustrating adhesion protein "knobs"

Malaria infection develops via two phases: one that involves the liver (exoerythrocytic phase), and one that involves red blood cells, or erythrocytes (erythrocytic phase). When an infected mosquito pierces a person's skin to take a blood meal, sporozoites in the mosquito's saliva enter the bloodstream and migrate to the liver where they infect hepatocytes, multiplying asexually and asymptomatically for a period of 8–30 days.[38]

After a potential dormant period in the liver, these organisms differentiate to yield thousands of merozoites, which, following rupture of their host cells, escape into the blood and infect red blood cells to begin the erythrocytic stage of the life cycle.[38] The parasite escapes from the liver undetected by wrapping itself in the cell membrane of the infected host liver cell.[39]

Within the red blood cells, the parasites multiply further, again asexually, periodically breaking out of their host cells to invade fresh red blood cells. Several such amplification cycles occur. Thus, classical descriptions of waves of fever arise from simultaneous waves of merozoites escaping and infecting red blood cells.[38]

Some P. vivax sporozoites do not immediately develop into exoerythrocytic-phase merozoites, but instead, produce hypnozoites that remain dormant for periods ranging from several months (7–10 months is typical) to several years. After a period of dormancy, they reactivate and produce merozoites. Hypnozoites are responsible for long incubation and late relapses in P. vivax infections,[35] although their existence in P. ovale is uncertain.[40]

The parasite is relatively protected from attack by the body's immune system because for most of its human life cycle it resides within the liver and blood cells and is relatively invisible to immune surveillance. However, circulating infected blood cells are destroyed in the spleen. To avoid this fate, the P. falciparum parasite displays adhesive proteins on the surface of the infected blood cells, causing the blood cells to stick to the walls of small blood vessels, thereby sequestering the parasite from passage through the general circulation and the spleen.[41] The blockage of the microvasculature causes symptoms such as those of placental malaria.[42] Sequestered red blood cells can breach the blood–brain barrier and cause cerebral malaria.[43]

Genetic resistance

According to a 2005 review, the high levels of mortality and morbidity caused by malaria, especially by the species P. falciparum, have placed the greatest selective pressure on the human genome in recent history. Several genetic factors provide some resistance to it, including sickle cell trait, thalassaemia traits, glucose-6-phosphate dehydrogenase deficiency, and the absence of Duffy antigens on red blood cells.[44][45]

The impact of sickle cell trait on malaria immunity illustrates some evolutionary trade-offs that have occurred because of endemic malaria. Sickle cell trait causes a change in the hemoglobin molecule in the blood. Normally, red blood cells have a very flexible, biconcave shape that allows them to move through narrow capillaries; however, when the modified hemoglobin S molecules are exposed to low amounts of oxygen, or crowd together due to dehydration, they can stick together forming strands that cause the cell to sickle or distort into a curved shape. In these strands, the molecule is not as effective in taking up or releasing oxygen, and the cell is not flexible enough to circulate freely. In the early stages of malaria, the parasite can cause infected red cells to sickle, and so they are removed from circulation sooner. This reduces the frequency with which malaria parasites complete their life cycle in the cell. Individuals who are homozygous (with two copies of the abnormal hemoglobin beta allele) have sickle-cell anaemia, while those who are heterozygous (with one abnormal allele and one normal allele) experience resistance to malaria without severe anaemia. Although the shorter life expectancy for those with the homozygous condition would tend to disfavor the trait's survival, the trait is preserved in malaria-prone regions because of the benefits provided by the heterozygous form.[45][46]

Liver dysfunction

Liver dysfunction as a result of malaria is uncommon and usually only occurs in those with another liver condition such as viral hepatitis or chronic liver disease. The syndrome is sometimes called malarial hepatitis.[47] While it has been considered a rare occurrence, malarial hepatopathy has seen an increase, particularly in Southeast Asia and India. Liver compromise in people with malaria correlates with a greater likelihood of complications and death.[47]

Diagnosis

The blood film is the gold standard for malaria diagnosis.
Ring-forms and gametocytes of Plasmodium falciparum in human blood

Owing to the non-specific nature of the presentation of symptoms, diagnosis of malaria in non-endemic areas requires a high degree of suspicion, which might be elicited by any of the following: recent travel history, enlarged spleen, fever, low number of platelets in the blood, and higher-than-normal levels of bilirubin in the blood combined with a normal level of white blood cells.[5]

Malaria is usually confirmed by the microscopic examination of blood films or by antigen-based rapid diagnostic tests (RDT).[48][49] In some areas, RDTs need to be able to distinguish whether the malaria symptoms are caused by Plasmodium falciparum or by other species of parasites since treatment strategies could differ for non-P. falciparum infections.[50] Microscopy is the most commonly used method to detect the malarial parasite—about 165 million blood films were examined for malaria in 2010.[51] Despite its widespread usage, diagnosis by microscopy suffers from two main drawbacks: many settings (especially rural) are not equipped to perform the test, and the accuracy of the results depends on both the skill of the person examining the blood film and the levels of the parasite in the blood. The sensitivity of blood films ranges from 75–90% in optimum conditions, to as low as 50%. Commercially available RDTs are often more accurate than blood films at predicting the presence of malaria parasites, but they are widely variable in diagnostic sensitivity and specificity depending on manufacturer, and are unable to tell how many parasites are present.[51]

In regions where laboratory tests are readily available, malaria should be suspected, and tested for, in any unwell person who has been in an area where malaria is endemic. In areas that cannot afford laboratory diagnostic tests, it has become common to use only a history of fever as the indication to treat for malaria—thus the common teaching "fever equals malaria unless proven otherwise". A drawback of this practice is overdiagnosis of malaria and mismanagement of non-malarial fever, which wastes limited resources, erodes confidence in the health care system, and contributes to drug resistance.[52] Although polymerase chain reaction-based tests have been developed, they are not widely used in areas where malaria is common as of 2012, due to their complexity.[5]

Classification

Malaria is classified as either "severe" or "uncomplicated" by the World Health Organization (WHO).[5] It is deemed severe when any of the WHO criteria for severe malaria are present; otherwise it is considered uncomplicated.[53]

Cerebral malaria is defined as severe P. falciparum malaria presenting with neurological symptoms, including coma (with a Glasgow coma scale less than 11, or a Blantyre coma scale less than 3), or with a coma that lasts longer than 30 minutes after a seizure.[54]

Various types of malaria have been called by the names below:[55]

algid malaria (Plasmodium falciparum): severe malaria affecting the cardiovascular system and causing chills and circulatory shock
bilious malaria (Plasmodium falciparum): severe malaria affecting the liver and causing vomiting and jaundice
cerebral malaria (Plasmodium falciparum): severe malaria affecting the cerebrum
congenital malaria (various plasmodia): plasmodium introduced from the mother via the fetal circulation
falciparum malaria, Plasmodium falciparum malaria, pernicious malaria (Plasmodium falciparum)
ovale malaria, Plasmodium ovale malaria (Plasmodium ovale)
quartan malaria, malariae malaria, Plasmodium malariae malaria (Plasmodium malariae): paroxysms every fourth day (quartan), counting the day of occurrence as the first day
quotidian malaria (Plasmodium falciparum, Plasmodium vivax): paroxysms daily (quotidian)
tertian malaria (Plasmodium falciparum, Plasmodium ovale, Plasmodium vivax): paroxysms every third day (tertian), counting the day of occurrence as the first
transfusion malaria (various plasmodia): plasmodium introduced by blood transfusion, needle sharing, or needlestick injury
vivax malaria, Plasmodium vivax malaria (Plasmodium vivax)

Prevention

An Anopheles stephensi mosquito shortly after obtaining blood from a human (the droplet of blood is expelled as a surplus). This mosquito is a vector of malaria, and mosquito control is an effective way of reducing its incidence.

Methods used to prevent malaria include medications, mosquito elimination and the prevention of bites. There is no vaccine for malaria. The presence of malaria in an area requires a combination of high human population density, high Anopheles mosquito population density and high rates of transmission from humans to mosquitoes and from mosquitoes to humans. If any of these is lowered sufficiently, the parasite will eventually disappear from that area, as happened in North America, Europe and parts of the Middle East. However, unless the parasite is eliminated from the whole world, it could become re-established if conditions revert to a combination that favors the parasite's reproduction. Furthermore, the cost per person of eliminating Anopheles mosquitoes rises with decreasing population density, making it economically unfeasible in some areas.[56]

Prevention of malaria may be more cost-effective than treatment of the disease in the long run, but the initial costs required are out of reach of many of the world's poorest people. There is a wide difference in the costs of control (i.e. maintenance of low endemicity) and elimination programs between countries. For example, in China—whose government in 2010 announced a strategy to pursue malaria elimination in the Chinese provinces—the required investment is a small proportion of public expenditure on health. In contrast, a similar program in Tanzania would cost an estimated one-fifth of the public health budget.[57]

In areas where malaria is common, children under five years old often have anemia, which is sometimes due to malaria. Giving children with anemia in these areas preventive antimalarial medication improves red blood cell levels slightly but does not affect the risk of death or need for hospitalization.[58]

Mosquito control

Man spraying kerosene oil in standing water, Panama Canal Zone 1912

Vector control refers to methods used to decrease malaria by reducing the levels of transmission by mosquitoes. For individual protection, the most effective insect repellents are based on DEET or picaridin.[59] Insecticide-treated mosquito nets (ITNs) and indoor residual spraying (IRS) have been shown to be highly effective in preventing malaria among children in areas where malaria is common.[60][61] Prompt treatment of confirmed cases with artemisinin-based combination therapies (ACTs) may also reduce transmission.[62]
Walls where indoor residual spraying of DDT has been applied. The mosquitoes remain on the wall until they fall down dead on the floor.
A mosquito net in use.

Mosquito nets help keep mosquitoes away from people and reduce infection rates and transmission of malaria. Nets are not a perfect barrier and are often treated with an insecticide designed to kill the mosquito before it has time to find a way past the net. Insecticide-treated nets are estimated to be twice as effective as untreated nets and offer greater than 70% protection compared with no net.[63] Between 2000 and 2008, the use of ITNs saved the lives of an estimated 250,000 infants in Sub-Saharan Africa.[64] About 13% of households in Sub-Saharan countries owned ITNs in 2007,[65] and 31% of African households were estimated to own at least one ITN in 2008. In 2000, 1.7 million (1.8%) of African children living in areas of the world where malaria is common were protected by an ITN. That number increased to 20.3 million (18.5%) of African children using ITNs in 2007, leaving 89.6 million children unprotected,[66] and to 68% of African children using mosquito nets in 2015.[67] Most nets are impregnated with pyrethroids, a class of insecticides with low toxicity. They are most effective when used from dusk to dawn.[68] It is recommended to hang a large "bed net" above the center of a bed and either tuck the edges under the mattress or make sure it is large enough to touch the ground.[69]

Indoor residual spraying is the spraying of insecticides on the walls inside a home. After feeding, many mosquitoes rest on a nearby surface while digesting the bloodmeal, so if the walls of houses have been coated with insecticides, the resting mosquitoes can be killed before they can bite another person and transfer the malaria parasite.[70] As of 2006, the World Health Organization recommends 12 insecticides in IRS operations, including DDT and the pyrethroids cyfluthrin and deltamethrin.[71] This public health use of small amounts of DDT is permitted under the Stockholm Convention, which prohibits its agricultural use.[72] One problem with all forms of IRS is insecticide resistance. Mosquitoes affected by IRS tend to rest and live indoors, and due to the irritation caused by spraying, their descendants tend to rest and live outdoors, meaning that they are less affected by the IRS.[73]

There are a number of other methods to reduce mosquito bites and slow the spread of malaria. Efforts to decrease mosquito larvae by decreasing the availability of open water in which they develop, or by adding substances to decrease their development, are effective in some locations.[74] Electronic mosquito repellent devices, which make very high-frequency sounds that are supposed to keep female mosquitoes away, have no supporting evidence.[75]

Other methods

Community participation and health education strategies promoting awareness of malaria and the importance of control measures have been successfully used to reduce the incidence of malaria in some areas of the developing world.[76] Recognizing the disease in its early stages can prevent it from becoming fatal. Education can also inform people to cover areas of stagnant, still water, such as water tanks, which are ideal breeding grounds for the parasite and mosquito, thus cutting down the risk of transmission between people. This is generally applied in urban areas, where large populations are concentrated in a confined space and transmission is most likely.[77] Intermittent preventive therapy is another intervention that has been used successfully to control malaria in pregnant women and infants,[78] and in preschool children where transmission is seasonal.[79]

Medications

There are a number of drugs that can help prevent or interrupt malaria in travelers to places where infection is common. Many of these drugs are also used in treatment. Chloroquine may be used where chloroquine-resistant parasites are not common.[80] In places where Plasmodium is resistant to one or more medications, three medications—mefloquine (Lariam), doxycycline (available generically), or the combination of atovaquone and proguanil hydrochloride (Malarone)—are frequently used when prophylaxis is needed.[80] Doxycycline and the atovaquone plus proguanil combination are the best tolerated; mefloquine is associated with death, suicide, and neurological and psychiatric symptoms.[80]

The protective effect does not begin immediately, and people visiting areas where malaria exists usually start taking the drugs one to two weeks before arriving and continue taking them for four weeks after leaving (except for atovaquone/proguanil, which only needs to be started two days before and continued for seven days afterward).[81] The use of preventive drugs is often not practical for those who live in areas where malaria exists, and their use is usually limited to pregnant women and short-term visitors. This is due to the cost of the drugs, side effects from long-term use, and the difficulty in obtaining anti-malarial drugs outside of wealthy nations.[82] During pregnancy, medication to prevent malaria has been found to improve the weight of the baby at birth and decrease the risk of anemia in the mother.[83] The use of preventive drugs where malaria-bearing mosquitoes are present may encourage the development of partial resistance.[84]

Treatment

An advertisement for quinine as a malaria treatment from 1927, entitled "The Mosquito Danger". Its six-panel cartoon shows a breadwinner with malaria and his family starving, his wife selling her ornaments, a doctor administering quinine, the patient recovering, the doctor indicating that quinine can be obtained from the post office if needed again, and a man who refused quinine, dead on a stretcher.

Malaria is treated with antimalarial medications; the ones used depend on the type and severity of the disease. While medications against fever are commonly used, their effects on outcomes are not clear.[85]

Simple or uncomplicated malaria may be treated with oral medications. The most effective treatment for P. falciparum infection is the use of artemisinins in combination with other antimalarials (known as artemisinin-combination therapy, or ACT), which decreases resistance to any single drug component.[86] These additional antimalarials include: amodiaquine, lumefantrine, mefloquine or sulfadoxine/pyrimethamine.[87] Another recommended combination is dihydroartemisinin and piperaquine.[88][89] ACT is about 90% effective when used to treat uncomplicated malaria.[64] To treat malaria during pregnancy, the WHO recommends the use of quinine plus clindamycin early in the pregnancy (1st trimester), and ACT in later stages (2nd and 3rd trimesters).[90] In the 2000s, malaria with partial resistance to artemisinins emerged in Southeast Asia.[91][92] Infection with P. vivax, P. ovale or P. malariae usually does not require hospitalization. Treatment of P. vivax requires both treatment of blood stages (with chloroquine or ACT) and clearance of liver forms with primaquine.[93] Treatment with tafenoquine prevents relapses after confirmed P. vivax malaria.[94]

Severe and complicated malaria are almost always caused by infection with P. falciparum. The other species usually cause only febrile disease.[95] Severe and complicated malaria are medical emergencies since mortality rates are high (10% to 50%).[96] Cerebral malaria is the form of severe and complicated malaria with the worst neurological symptoms.[97] Recommended treatment for severe malaria is the intravenous use of antimalarial drugs. For severe malaria, parenteral artesunate was superior to quinine in both children and adults.[98] In another systematic review, artemisinin derivatives (artemether and arteether) were as efficacious as quinine in the treatment of cerebral malaria in children.[99] Treatment of severe malaria involves supportive measures that are best done in a critical care unit. This includes the management of high fevers and the seizures that may result from them. It also includes monitoring for poor breathing effort, low blood sugar, and low blood potassium.[24]

Resistance

Drug resistance poses a growing problem in 21st-century malaria treatment.[100] Resistance is now common against all classes of antimalarial drugs apart from artemisinins, so treatment of resistant strains has become increasingly dependent on this class of drugs. The cost of artemisinins limits their use in the developing world.[101] Malaria strains found on the Cambodia–Thailand border are resistant to combination therapies that include artemisinins, and may therefore be untreatable.[102] Exposure of the parasite population to artemisinin monotherapies in subtherapeutic doses for over 30 years, and the availability of substandard artemisinins, likely drove the selection of the resistant phenotype.[103] Resistance to artemisinin has been detected in Cambodia, Myanmar, Thailand, and Vietnam,[104] and there has been emerging resistance in Laos.[105][106]

Prognosis

Disability-adjusted life years for malaria per 100,000 inhabitants in 2004

When properly treated, people with malaria can usually expect a complete recovery.[107] However, severe malaria can progress extremely rapidly and cause death within hours or days.[108] In the most severe cases of the disease, fatality rates can reach 20%, even with intensive care and treatment.[5] Over the longer term, developmental impairments have been documented in children who have suffered episodes of severe malaria.[109] Chronic infection without severe disease can occur in an immune-deficiency syndrome associated with a decreased responsiveness to Salmonella bacteria and the Epstein–Barr virus.[110]

During childhood, malaria causes anemia during a period of rapid brain development, and also direct brain damage resulting from cerebral malaria.[109] Some survivors of cerebral malaria have an increased risk of neurological and cognitive deficits, behavioural disorders, and epilepsy.[111] Malaria prophylaxis was shown to improve cognitive function and school performance in clinical trials when compared to placebo groups.[109]

Epidemiology

Distribution of malaria in the world:[112] elevated occurrence of chloroquine- or multidrug-resistant malaria; occurrence of chloroquine-resistant malaria; no Plasmodium falciparum or chloroquine resistance; no malaria

Deaths due to malaria per million persons in 2012

The WHO estimates that in 2015 there were 214 million new cases of malaria resulting in 438,000 deaths.[113] Others have estimated the number of cases at between 350 and 550 million for falciparum malaria.[114] The majority of cases (65%) occur in children under 15 years old.[115] About 125 million pregnant women are at risk of infection each year; in Sub-Saharan Africa, maternal malaria is associated with up to 200,000 estimated infant deaths yearly.[20] There are about 10,000 malaria cases per year in Western Europe, and 1300–1500 in the United States.[16] About 900 people died from the disease in Europe between 1993 and 2003.[59] Both the global incidence of disease and resulting mortality have declined in recent years. According to the WHO and UNICEF, deaths attributable to malaria in 2015 were reduced by 60%[67] from a 2000 estimate of 985,000, largely due to the widespread use of insecticide-treated nets and artemisinin-based combination therapies.[64] In 2012, there were 207 million cases of malaria. That year, the disease is estimated to have killed between 473,000 and 789,000 people, many of whom were children in Africa.[2] Efforts at decreasing the disease in Africa since the turn of the millennium have been partially effective, with rates of the disease dropping by an estimated forty percent on the continent.[116]

Malaria is presently endemic in a broad band around the equator, in areas of the Americas, many parts of Asia, and much of Africa; 85–90% of malaria fatalities occur in Sub-Saharan Africa.[117] An estimate for 2009 reported that countries with the highest death rate per 100,000 of population were Ivory Coast (86.15), Angola (56.93) and Burkina Faso (50.66).[118] A 2010 estimate indicated the deadliest countries per population were Burkina Faso, Mozambique and Mali.[115] The Malaria Atlas Project aims to map global endemic levels of malaria, providing a means with which to determine the global spatial limits of the disease and to assess disease burden.[119][120] This effort led to the publication of a map of P. falciparum endemicity in 2010.[121] As of 2010, about 100 countries have endemic malaria.[122][123] Every year, 125 million international travellers visit these countries, and more than 30,000 contract the disease.[59]

The geographic distribution of malaria within large regions is complex, and malaria-afflicted and malaria-free areas are often found close to each other.[124] Malaria is prevalent in tropical and subtropical regions because of rainfall, consistent high temperatures and high humidity, along with stagnant waters in which mosquito larvae readily mature, providing them with the environment they need for continuous breeding.[125] In drier areas, outbreaks of malaria have been predicted with reasonable accuracy by mapping rainfall.[126] Malaria is more common in rural areas than in cities. For example, several cities in the Greater Mekong Subregion of Southeast Asia are essentially malaria-free, but the disease is prevalent in many rural regions, including along international borders and forest fringes.[127] In contrast, malaria in Africa is present in both rural and urban areas, though the risk is lower in the larger cities.[128]

History

Ancient malaria oocysts preserved in Dominican amber

Although the parasite responsible for P. falciparum malaria has been in existence for 50,000–100,000 years, the population size of the parasite did not increase until about 10,000 years ago, concurrently with advances in agriculture[129] and the development of human settlements. Close relatives of the human malaria parasites remain common in chimpanzees. Some evidence suggests that P. falciparum malaria may have originated in gorillas.[130]

References to the unique periodic fevers of malaria are found throughout recorded history.[131] Hippocrates described periodic fevers, labelling them tertian, quartan, subtertian and quotidian.[132] The Roman Columella associated the disease with insects from swamps.[132] Malaria may have contributed to the decline of the Roman Empire,[133] and was so pervasive in Rome that it was known as the "Roman fever".[134] Several regions in ancient Rome were considered at-risk for the disease because of the favourable conditions present for malaria vectors. This included areas such as southern Italy, the island of Sardinia, the Pontine Marshes, the lower regions of coastal Etruria and the city of Rome along the Tiber River. Stagnant water in these places offered mosquitoes breeding grounds. Irrigated gardens, swamp-like grounds, runoff from agriculture, and drainage problems from road construction all increased the amount of standing water.[135]
British doctor Ronald Ross received the Nobel Prize for Physiology or Medicine in 1902 for his work on malaria.

The term malaria originates from Medieval Italian: mala aria—"bad air"; the disease was formerly called ague or marsh fever due to its association with swamps and marshland.[136] The term first appeared in the English literature about 1829.[132] Malaria was once common in most of Europe and North America,[137] where it is no longer endemic,[138] though imported cases do occur.[139]

Scientific studies on malaria made their first significant advance in 1880, when Charles Louis Alphonse Laveran—a French army doctor working in the military hospital of Constantine in Algeria—observed parasites inside the red blood cells of infected people for the first time. He, therefore, proposed that malaria is caused by this organism, the first time a protist was identified as causing disease.[140] For this and later discoveries, he was awarded the 1907 Nobel Prize for Physiology or Medicine. A year later, Carlos Finlay, a Cuban doctor treating people with yellow fever in Havana, provided strong evidence that mosquitoes were transmitting disease to and from humans.[141] This work followed earlier suggestions by Josiah C. Nott,[142] and work by Sir Patrick Manson, the "father of tropical medicine", on the transmission of filariasis.[143]
Traditional Chinese medicine researcher Tu Youyou received the Nobel Prize in Physiology or Medicine in 2015 for her work on the antimalarial drug artemisinin.

In April 1894, the Scottish physician Sir Ronald Ross visited Sir Patrick Manson at his house on Queen Anne Street, London. This visit was the start of four years of collaboration and fervent research that culminated in 1898 when Ross, who was working in the Presidency General Hospital in Calcutta, proved the complete life-cycle of the malaria parasite in mosquitoes. He thus proved that the mosquito was the vector for malaria in humans by showing that certain mosquito species transmit malaria to birds. He isolated malaria parasites from the salivary glands of mosquitoes that had fed on infected birds.[144] For this work, Ross received the 1902 Nobel Prize in Medicine. After resigning from the Indian Medical Service, Ross worked at the newly established Liverpool School of Tropical Medicine and directed malaria-control efforts in Egypt, Panama, Greece and Mauritius.[145] The findings of Finlay and Ross were later confirmed by a medical board headed by Walter Reed in 1900. Its recommendations were implemented by William C. Gorgas in the health measures undertaken during construction of the Panama Canal. This public-health work saved the lives of thousands of workers and helped develop the methods used in future public-health campaigns against the disease.[146]
Artemisia annua, source of the antimalarial drug artemisinin

The first effective treatment for malaria came from the bark of the cinchona tree, which contains quinine. This tree grows on the slopes of the Andes, mainly in Peru. The indigenous peoples of Peru made a tincture of cinchona to control fever. Once its effectiveness against malaria was recognized, the Jesuits introduced the treatment to Europe around 1640; by 1677, it was included in the London Pharmacopoeia as an antimalarial treatment.[147] It was not until 1820 that the active ingredient, quinine, was extracted from the bark, isolated and named by the French chemists Pierre Joseph Pelletier and Joseph Bienaimé Caventou.[148][149]

Quinine became the predominant antimalarial medication until the 1920s, when other medications began to be developed. In the 1940s, chloroquine replaced quinine as the treatment of both uncomplicated and severe malaria until resistance supervened, first in Southeast Asia and South America in the 1950s and then globally in the 1980s.[150]

Artemisia annua has been used by Chinese herbalists in traditional Chinese medicine for 2,000 years. In 1596, Li Shizhen recommended tea made from qinghao specifically to treat malaria symptoms in his "Compendium of Materia Medica". Artemisinins, discovered by Chinese scientist Tu Youyou and colleagues in the 1970s from the plant Artemisia annua, became the recommended treatment for P. falciparum malaria, administered in combination with other antimalarials as well as in severe disease.[151] Tu says she was influenced by a traditional Chinese herbal medicine source, The Handbook of Prescriptions for Emergency Treatments, written in 340 by Ge Hong.[152] For her work on malaria, Tu Youyou received the 2015 Nobel Prize in Physiology or Medicine.[153]

Plasmodium vivax was used between 1917 and the 1940s for malariotherapy—deliberate injection of malaria parasites to induce a fever to combat certain diseases such as tertiary syphilis. In 1927, the inventor of this technique, Julius Wagner-Jauregg, received the Nobel Prize in Physiology or Medicine for his discoveries. The technique was dangerous, killing about 15% of patients, so it is no longer in use.[154]
U.S. Marines with malaria in a rough field hospital on Guadalcanal, October 1942

The first pesticide used for indoor residual spraying was DDT.[155] Although it was initially used exclusively to combat malaria, its use quickly spread to agriculture. In time, pest control, rather than disease control, came to dominate DDT use, and this large-scale agricultural use led to the evolution of resistant mosquitoes in many regions. The DDT resistance shown by Anopheles mosquitoes can be compared to antibiotic resistance shown by bacteria. During the 1960s, awareness of the negative consequences of its indiscriminate use increased, ultimately leading to bans on agricultural applications of DDT in many countries in the 1970s.[72] Before DDT, malaria was successfully eliminated or controlled in tropical areas like Brazil and Egypt by removing or poisoning the breeding grounds of the mosquitoes or the aquatic habitats of the larval stages, for example by applying the highly toxic arsenic compound Paris Green to places with standing water.[156]

Malaria vaccines have been an elusive goal of research. The first promising studies demonstrating the potential for a malaria vaccine were performed in 1967 by immunizing mice with live, radiation-attenuated sporozoites, which provided significant protection to the mice upon subsequent injection with normal, viable sporozoites. Since the 1970s, there has been a considerable effort to develop similar vaccination strategies for humans.[157] The first vaccine, called RTS,S, was approved by European regulators in 2015.[158]

Society and culture

Economic impact

Malaria clinic in Tanzania

Malaria is not just a disease commonly associated with poverty: some evidence suggests that it is also a cause of poverty and a major hindrance to economic development.[9][10] Although tropical regions are most affected, malaria's furthest influence reaches into some temperate zones that have extreme seasonal changes. The disease has been associated with major negative economic effects on regions where it is widespread. During the late 19th and early 20th centuries, it was a major factor in the slow economic development of the American southern states.[159]

A comparison of average per capita GDP in 1995, adjusted for purchasing power parity, between countries with malaria and countries without malaria gives a fivefold difference (US$1,526 versus US$8,268). In the period 1965 to 1990, countries where malaria was common had an average per capita GDP that increased only 0.4% per year, compared to 2.4% per year in other countries.[160]
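The growth-rate gap cited above compounds substantially over the 25-year period. A quick sketch of the arithmetic, using the figures from the paragraph and assuming, for illustration, that growth was constant throughout:

```python
# Compound the cited average per-capita GDP growth rates over 1965-1990.
years = 1990 - 1965  # 25 years

endemic_growth = (1 + 0.004) ** years  # 0.4% per year where malaria was common
other_growth = (1 + 0.024) ** years    # 2.4% per year elsewhere

print(f"GDP multiple, malaria-endemic countries: {endemic_growth:.2f}x")
print(f"GDP multiple, other countries:           {other_growth:.2f}x")
print(f"Relative divergence over the period:     {other_growth / endemic_growth:.2f}x")
```

Even these modest-looking annual rates diverge to roughly a 1.6-fold relative gap by 1990, consistent in direction with the large per-capita GDP difference quoted for 1995.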

Poverty can increase the risk of malaria since those in poverty do not have the financial capacities to prevent or treat the disease. In its entirety, the economic impact of malaria has been estimated to cost Africa US$12 billion every year. The economic impact includes costs of health care, working days lost due to sickness, days lost in education, decreased productivity due to brain damage from cerebral malaria, and loss of investment and tourism.[11] The disease has a heavy burden in some countries, where it may be responsible for 30–50% of hospital admissions, up to 50% of outpatient visits, and up to 40% of public health spending.[161]
Child with malaria in Ethiopia

Cerebral malaria is one of the leading causes of neurological disabilities in African children.[111] Studies comparing cognitive functions before and after treatment for severe malarial illness continued to show significantly impaired school performance and cognitive abilities even after recovery.[109] Consequently, severe and cerebral malaria have far-reaching socioeconomic consequences that extend beyond the immediate effects of the disease.[162]

Counterfeit and substandard drugs

Sophisticated counterfeits have been found in several Asian countries such as Cambodia,[163] China,[164] Indonesia, Laos, Thailand, and Vietnam, and are an important cause of avoidable death in those countries.[165] The WHO has said that studies indicate that up to 40% of artesunate-based malaria medications are counterfeit, especially in the Greater Mekong region, and has established a rapid alert system to enable information about counterfeit drugs to be reported quickly to the relevant authorities in participating countries.[166] There is no reliable way for doctors or lay people to detect counterfeit drugs without help from a laboratory. Companies are attempting to combat the persistence of counterfeit drugs by using new technology to provide security from source to distribution.[167]

Another clinical and public health concern is the proliferation of substandard antimalarial medicines resulting from inappropriate concentration of ingredients, contamination with other drugs or toxic impurities, poor quality ingredients, poor stability and inadequate packaging.[168] A 2012 study demonstrated that roughly one-third of antimalarial medications in Southeast Asia and Sub-Saharan Africa failed chemical or packaging analysis, or were falsified.[169]

War

World War II poster

Throughout history, the contraction of malaria has played a prominent role in the fates of government rulers, nation-states, military personnel, and military actions.[170] In 1910, Nobel Prize in Medicine-winner Ronald Ross (himself a malaria survivor), published a book titled The Prevention of Malaria that included a chapter titled "The Prevention of Malaria in War." The chapter's author, Colonel C. H. Melville, Professor of Hygiene at Royal Army Medical College in London, addressed the prominent role that malaria has historically played during wars: "The history of malaria in war might almost be taken to be the history of war itself, certainly the history of war in the Christian era. ... It is probably the case that many of the so-called camp fevers, and probably also a considerable proportion of the camp dysentery, of the wars of the sixteenth, seventeenth and eighteenth centuries were malarial in origin."[171]

Malaria was the most significant health hazard encountered by U.S. troops in the South Pacific during World War II, where about 500,000 men were infected.[172] According to Joseph Patrick Byrne, "Sixty thousand American soldiers died of malaria during the African and South Pacific campaigns."[173]

Significant financial investments have been made to procure existing and create new anti-malarial agents. During World War I and World War II, inconsistent supplies of the natural anti-malaria drugs cinchona bark and quinine prompted substantial funding into research and development of other drugs and vaccines. American military organizations conducting such research initiatives include the Navy Medical Research Center, Walter Reed Army Institute of Research, and the U.S. Army Medical Research Institute of Infectious Diseases of the US Armed Forces.[174]

Additionally, initiatives have been founded such as Malaria Control in War Areas (MCWA), established in 1942, and its successor, the Communicable Disease Center (now known as the Centers for Disease Control and Prevention, or CDC) established in 1946. According to the CDC, MCWA "was established to control malaria around military training bases in the southern United States and its territories, where malaria was still problematic".[175]

Eradication efforts

Members of the Malaria Commission of the League of Nations collecting larvae on the Danube delta, 1929

Several notable attempts are being made to eliminate the parasite from sections of the world, or to eradicate it worldwide. In 2006, the organization Malaria No More set a public goal of eliminating malaria from Africa by 2015, and the organization plans to dissolve if that goal is accomplished.[176] Several malaria vaccines are in clinical trials, which are intended to provide protection for children in endemic areas and reduce the speed of transmission of the disease. As of 2012, The Global Fund to Fight AIDS, Tuberculosis and Malaria has distributed 230 million insecticide-treated nets intended to stop mosquito-borne transmission of malaria.[177] The U.S.-based Clinton Foundation has worked to manage demand and stabilize prices in the artemisinin market.[178] Other efforts, such as the Malaria Atlas Project, focus on analysing climate and weather information required to accurately predict the spread of malaria based on the availability of habitat of malaria-carrying mosquitoes.[119] The Malaria Policy Advisory Committee (MPAC) of the World Health Organization (WHO) was formed in 2012, "to provide strategic advice and technical input to WHO on all aspects of malaria control and elimination".[179] In November 2013, WHO and the malaria vaccine funders group set a goal to develop vaccines designed to interrupt malaria transmission with the long-term goal of malaria eradication.[180]

Malaria has been successfully eliminated or greatly reduced in certain areas. Malaria was once common in the United States and southern Europe, but vector control programs, in conjunction with the monitoring and treatment of infected humans, eliminated it from those regions. Several factors contributed, such as the draining of wetland breeding grounds for agriculture and other changes in water management practices, and advances in sanitation, including greater use of glass windows and screens in dwellings.[181] Malaria was eliminated from most parts of the USA in the early 20th century by such methods, and the use of the pesticide DDT and other means eliminated it from the remaining pockets in the South in the 1950s as part of the National Malaria Eradication Program.[182] Bill Gates has said that he thinks global eradication is possible by 2040.[183]

Research

The Malaria Eradication Research Agenda (malERA) initiative was a consultative process to identify which areas of research and development (R&D) needed to be addressed for the worldwide eradication of malaria.[184][185]

Vaccine

A vaccine against malaria, RTS,S, was approved by European regulators in 2015.[158] As of 2016, it is undergoing pilot trials in select countries.
Immunity (or, more accurately, tolerance) to P. falciparum malaria does occur naturally, but only in response to years of repeated infection.[37] An individual can be protected from a P. falciparum infection if they receive about a thousand bites from mosquitoes that carry a version of the parasite rendered non-infective by a dose of X-ray irradiation.[186] The highly polymorphic nature of many P. falciparum proteins results in significant challenges to vaccine design. Vaccine candidates that target antigens on gametes, zygotes, or ookinetes in the mosquito midgut aim to block the transmission of malaria. These transmission-blocking vaccines induce antibodies in the human blood; when a mosquito takes a blood meal from a protected individual, these antibodies prevent the parasite from completing its development in the mosquito.[187] Other vaccine candidates, targeting the blood-stage of the parasite's life cycle, have been inadequate on their own.[188] For example, SPf66 was tested extensively in areas where the disease is common in the 1990s, but trials showed it to be insufficiently effective.[189]

Medications

Malaria parasites contain apicoplasts, organelles usually found in plants, complete with their own genomes. These apicoplasts are thought to have originated through the endosymbiosis of algae and play a crucial role in various aspects of parasite metabolism, such as fatty acid biosynthesis. Over 400 proteins have been found to be produced by apicoplasts and these are now being investigated as possible targets for novel anti-malarial drugs.[190]

With the emergence of drug-resistant Plasmodium parasites, new strategies are being developed to combat the widespread disease. One such approach is the introduction of synthetic pyridoxal-amino acid adducts, which are taken up by the parasite and ultimately interfere with its ability to create several essential B vitamins.[191][192] Antimalarial drugs using synthetic metal-based complexes are also attracting research interest.[193][194]
  • (+)-SJ733: Part of a wider class of experimental drugs called spiroindolones. It inhibits the ATP4 protein of infected red blood cells, causing the cells to shrink and become rigid like aging cells. This triggers the immune system to eliminate the infected cells, as demonstrated in a mouse model. As of 2014, a Phase 1 clinical trial to assess the safety profile in humans was planned by the Howard Hughes Medical Institute.[195]
  • NITD246 and NITD609: Also belong to the spiroindolone class and target the ATP4 protein.[195]

Other

A non-chemical vector control strategy involves genetic manipulation of malaria mosquitoes. Advances in genetic engineering technologies make it possible to introduce foreign DNA into the mosquito genome and either decrease the lifespan of the mosquito, or make it more resistant to the malaria parasite. Sterile insect technique is a genetic control method whereby large numbers of sterile male mosquitoes are reared and released. Mating with wild females reduces the wild population in the subsequent generation; repeated releases eventually eliminate the target population.[63]
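The release-and-suppress dynamic described above can be illustrated with a simplified version of Knipling's classic sterile insect technique model. The growth rate, starting population, and release numbers below are illustrative assumptions, not field data:

```python
# Simplified Knipling-style model of the sterile insect technique (SIT).
# Assumes random mating, a fixed per-generation growth rate, and a constant
# number of sterile males released each generation (all illustrative values).

def next_generation(wild: float, sterile_release: float, growth: float = 5.0) -> float:
    """Wild population next generation, scaled by the fraction of fertile matings."""
    fertile_fraction = wild / (wild + sterile_release)
    return wild * growth * fertile_fraction

population = 1_000_000.0  # hypothetical starting wild population
for generation in range(6):
    print(f"generation {generation}: {population:,.0f} wild insects")
    population = next_generation(population, sterile_release=9_000_000.0)
```

Because the fertile fraction shrinks as the wild population falls while releases stay constant, suppression accelerates from one generation to the next, which is why repeated releases can eventually eliminate the target population.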

Genomics is central to malaria research. With the sequencing of P. falciparum, one of its vectors Anopheles gambiae, and the human genome, the genetics of all three organisms in the malaria lifecycle can be studied.[196] Another new application of genetic technology is the ability to produce genetically modified mosquitoes that do not transmit malaria, potentially allowing biological control of malaria transmission.[197]

In one study, a genetically-modified strain of Anopheles stephensi was created that no longer supported malaria transmission, and this resistance was passed down to mosquito offspring.[198]

Gene drive is a technique for changing wild populations, for instance to combat insects so they cannot transmit diseases (in particular mosquitoes in the cases of malaria and zika).[199]

Other animals

Nearly 200 parasitic Plasmodium species have been identified that infect birds, reptiles, and other mammals,[200] and about 30 species naturally infect non-human primates.[201] Some malaria parasites that affect non-human primates (NHP) serve as model organisms for human malarial parasites, such as P. coatneyi (a model for P. falciparum) and P. cynomolgi (a model for P. vivax). Diagnostic techniques used to detect parasites in NHP are similar to those employed for humans.[202] Malaria parasites that infect rodents are widely used as models in research, such as P. berghei.[203] Avian malaria primarily affects species of the order Passeriformes, and poses a substantial threat to birds of Hawaii, the Galapagos, and other archipelagoes. The parasite P. relictum is known to play a role in limiting the distribution and abundance of endemic Hawaiian birds. Global warming is expected to increase the prevalence and global distribution of avian malaria, as elevated temperatures provide optimal conditions for parasite reproduction.[204]

Sunday, April 30, 2017

Tetrahydrocannabinol

From Wikipedia, the free encyclopedia

Clinical data
Trade names: Marinol
Pregnancy category: US: C (risk not ruled out)
Dependence liability: 8–10% (relatively low risk of tolerance)[1]
Addiction liability: Low
Routes of administration: Oral, local/topical, transdermal, sublingual, inhaled
Pharmacokinetic data
Bioavailability: 10–35% (inhalation), 6–20% (oral)[3]
Protein binding: 97–99%[3][4][5]
Metabolism: Mostly hepatic by CYP2C[3]
Biological half-life: 1.6–59 h,[3] 25–36 h (orally administered dronabinol)
Excretion: 65–80% (feces), 20–35% (urine) as acid metabolites[3]
Identifiers
Synonyms: (6aR,10aR)-delta-9-tetrahydrocannabinol, (−)-trans-Δ⁹-tetrahydrocannabinol
ECHA InfoCard: 100.153.676
Chemical and physical data
Formula: C21H30O2
Molar mass: 314.469 g/mol
Specific rotation: −152° (ethanol)
Boiling point: 157 °C (315 °F)[7]
Solubility in water: 0.0028 mg/mL (23 °C)[6]
Tetrahydrocannabinol (THC, dronabinol, trade name Marinol) is the principal psychoactive constituent (or cannabinoid) of cannabis. The pharmaceutical formulation, dronabinol, is available by prescription in the U.S. and Canada. THC can be a clear, amber or gold colored glassy solid when cold, which becomes viscous and sticky if warmed.

Like most pharmacologically-active secondary metabolites of plants, THC in Cannabis is assumed to be involved in self-defense, perhaps against herbivores.[8] THC also possesses high UV-B (280–315 nm) absorption properties, which, it has been speculated, could protect the plant from harmful UV radiation exposure.[9][10][11]

THC, along with its double bond isomers and their stereoisomers, is one of only three cannabinoids scheduled by the UN Convention on Psychotropic Substances (the other two are dimethylheptylpyran and parahexyl). It was listed under Schedule I in 1971, but reclassified to Schedule II in 1991 following a recommendation from the WHO. Based on subsequent studies, the WHO has recommended reclassification to the less-stringent Schedule III.[12] Cannabis as a plant is scheduled by the Single Convention on Narcotic Drugs (Schedule I and IV). It is specifically still listed under Schedule I by US federal law[13] under the Controlled Substances Act passed by the US Congress in 1970.

Medical uses

Not to be confused with Droperidol.

Dronabinol is the INN for a pure isomer of THC, (–)-trans-Δ⁹-tetrahydrocannabinol,[14] which is the main isomer found in cannabis. It is used to treat anorexia in people with HIV/AIDS as well as for refractory nausea and vomiting in people undergoing chemotherapy. It is safe and effective for these uses.[15][16]

THC is also an active ingredient in nabiximols, a specific extract of Cannabis that was approved as a botanical drug in the United Kingdom in 2010 as a mouth spray for people with multiple sclerosis to alleviate neuropathic pain, spasticity, overactive bladder, and other symptoms.[17][18]

Adverse effects

Flower of a hybrid Cannabis strain (White Widow), which contains one of the highest amounts of cannabidiol, coated with trichomes, which contain more THC than any other part of the plant
Closeup of THC-filled trichomes on a Cannabis sativa leaf

An overdose of dronabinol usually presents with lethargy, decreased motor coordination, slurred speech, and postural hypotension.[19] Non-fatal overdoses have occurred.[20]

A meta-analysis of clinical trials using standardized cannabis extracts or THC, conducted by the American Academy of Neurology, found that of 1,619 persons treated with cannabis products (including some treated with smoked cannabis and nabiximols), 6.9% discontinued due to side effects, compared to 2.2% of 1,118 treated with placebo. Detailed information regarding side effects was not available from all trials, but nausea, increased weakness, behavioral or mood changes, suicidal ideation, hallucinations, dizziness, vasovagal symptoms, fatigue, and feelings of intoxication were each described as side effects in at least two trials. There was a single death rated by the investigator as "possibly related" to treatment. This person had a seizure followed by aspiration pneumonia. The paper does not describe whether this was one of the subjects from the epilepsy trials.[21]
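The discontinuation rates above can be converted into an absolute risk increase and a number-needed-to-harm (NNH). A quick check of the arithmetic, using only the percentages quoted from the meta-analysis:

```python
# Absolute risk increase and NNH for discontinuation due to side effects,
# using the rates quoted from the meta-analysis.
risk_treated = 0.069  # 6.9% of the 1,619 treated with cannabis products
risk_placebo = 0.022  # 2.2% of the 1,118 treated with placebo

risk_difference = risk_treated - risk_placebo
nnh = 1 / risk_difference

print(f"absolute risk increase: {risk_difference:.1%}")
print(f"NNH: about {nnh:.0f} patients treated per extra discontinuation")
```

This works out to roughly one extra discontinuation for every 21 patients treated, a figure the paper itself does not report.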

Pharmacology

Mechanism of action

The actions of THC result from its partial agonist activity at the cannabinoid receptor CB1 (Ki = 10 nM[22]), located mainly in the central nervous system, and the CB2 receptor (Ki = 24 nM[22]), mainly expressed in cells of the immune system.[23] The psychoactive effects of THC are primarily mediated by the activation of cannabinoid receptors, which results in a decrease in the concentration of the second messenger molecule cAMP through inhibition of adenylate cyclase.[24]
The presence of these specialized cannabinoid receptors in the brain led researchers to the discovery of endocannabinoids, such as anandamide and 2-arachidonoylglycerol (2-AG). THC targets receptors in a manner far less selective than endocannabinoid molecules released during retrograde signaling, as the drug has a relatively low cannabinoid receptor efficacy and affinity. In populations of low cannabinoid receptor density, THC may act to antagonize endogenous agonists that possess greater receptor efficacy.[25] THC is a lipophilic molecule[26] and may bind non-specifically to a variety of entities in the brain and body, such as adipose tissue (fat).[27][28]
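For a rough sense of what the Ki values quoted above imply, a single-site binding model can estimate fractional receptor occupancy, treating Ki as an approximation of the dissociation constant. The ligand concentrations below are illustrative assumptions, not measured brain levels:

```python
# Single-site binding model: fraction of receptors occupied at a given
# free ligand concentration. Treats Ki (an inhibition constant) as an
# approximation of Kd, which is a simplification.

def fractional_occupancy(ligand_nM: float, ki_nM: float) -> float:
    """occupancy = [L] / ([L] + Ki) for a one-site model."""
    return ligand_nM / (ligand_nM + ki_nM)

KI_CB1 = 10.0  # nM, cited in the text[22]
KI_CB2 = 24.0  # nM, cited in the text[22]

for conc in (1.0, 10.0, 100.0):  # hypothetical free THC concentrations
    cb1 = fractional_occupancy(conc, KI_CB1)
    cb2 = fractional_occupancy(conc, KI_CB2)
    print(f"{conc:6.1f} nM THC -> CB1 {cb1:.0%} occupied, CB2 {cb2:.0%} occupied")
```

Note that occupancy is not the same as effect: as the text explains, THC is a partial agonist with relatively low efficacy, so even a fully occupied receptor population produces a submaximal response.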

THC, similarly to cannabidiol, albeit less potently, is a positive allosteric modulator of the μ- and δ-opioid receptors.[29]

Due to its partial agonist activity, THC appears to produce greater downregulation of cannabinoid receptors than endocannabinoids do, further limiting its efficacy relative to other cannabinoids. While tolerance may limit the maximal effects of certain drugs, evidence suggests that tolerance develops unevenly across different effects, with the primary effects more resistant than the side effects, and may actually serve to widen the drug's therapeutic window.[25] However, this form of tolerance appears to be uneven across mouse brain areas. THC, as well as other cannabinoids that contain a phenol group, possesses mild antioxidant activity sufficient to protect neurons against oxidative stress, such as that produced by glutamate-induced excitotoxicity.[23]

Pharmacokinetics

THC is metabolized mainly to 11-OH-THC by the body. This metabolite is still psychoactive and is further oxidized to 11-nor-9-carboxy-THC (THC-COOH). In humans and animals, more than 100 metabolites have been identified, but 11-OH-THC and THC-COOH are the dominant ones.[30] Metabolism occurs mainly in the liver by the cytochrome P450 enzymes CYP2C9, CYP2C19, and CYP3A4.[31] More than 55% of THC is excreted in the feces and about 20% in the urine. The main urinary metabolites are the glucuronic acid ester of THC-COOH and free THC-COOH; in the feces, mainly 11-OH-THC is detected.[32]

Physical and chemical properties

Discovery and structure identification

The discovery of THC, by a team of researchers from Hebrew University Pharmacy School, was first reported in 1964,[33] with substantial later work reported by Raphael Mechoulam in June 1970.[34]

Solubility

An aromatic terpenoid, THC has very low solubility in water but good solubility in most organic solvents, specifically lipids and alcohols.[6] THC, CBD, CBN, CBC and CBG are among the more than 113 molecules that make up the phytocannabinoid family.[35][36]

Total synthesis

A total synthesis of the compound was reported in 1965; that procedure called for the intramolecular alkyl lithium attack on a starting carbonyl to form the fused rings, and a tosyl chloride mediated formation of the ether.[37][third-party source needed]

Biosynthesis

Biosynthesis of THCA

In the Cannabis plant, THC occurs mainly as tetrahydrocannabinolic acid (THCA, 2-COOH-THC). Geranyl pyrophosphate and olivetolic acid react, catalysed by an enzyme, to produce cannabigerolic acid,[38] which is cyclized by the enzyme THC acid synthase to give THCA. Over time, or when heated, THCA is decarboxylated, producing THC. The pathway for THCA biosynthesis is similar to the one that produces the bitter acid humulone in hops.[39][40]
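Because decarboxylation removes a CO2 group, the maximum THC obtainable from a given mass of THCA follows directly from the molecular weights (THCA ≈ 358.5 g/mol, THC ≈ 314.5 g/mol, with the ~44.0 g/mol difference leaving as CO2). A minimal sketch of that mass balance, assuming complete conversion with no thermal losses:

```python
# Mass balance for the decarboxylation THCA -> THC + CO2.
# Approximate molecular weights (g/mol): THCA ~358.5, THC ~314.5, CO2 ~44.0.
MW_THCA = 358.5
MW_THC = 314.5

def thc_from_thca(thca_mg: float) -> float:
    """Maximum THC (mg) from complete decarboxylation of thca_mg of THCA."""
    return thca_mg * MW_THC / MW_THCA

# 100 mg of THCA can yield at most ~87.7 mg of THC;
# the remaining ~12.3 mg is lost as CO2.
print(thc_from_thca(100.0))
```

In practice heating also degrades some THC (for example to CBN), so real yields fall below this theoretical maximum.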

Detection in body fluids

THC, 11-OH-THC and THC-COOH can be detected and quantified in blood, urine, hair, oral fluid or sweat using a combination of immunoassay and chromatographic techniques as part of a drug use testing program or in a forensic investigation.[41][42][43]

History

THC was first isolated in 1964 by Raphael Mechoulam and Yechiel Gaoni at the Weizmann Institute of Science.[33][44][45]
Since at least 1986, the trend has been for THC in general, and especially the Marinol preparation, to be downgraded to less and less stringently controlled schedules of controlled substances, in the U.S. and throughout the rest of the world.[citation needed]

On May 13, 1986, the Drug Enforcement Administration (DEA) issued a Final Rule and Statement of Policy authorizing the "Rescheduling of Synthetic Dronabinol in Sesame Oil and Encapsulated in Soft Gelatin Capsules From Schedule I to Schedule II" (DEA 51 FR 17476-78). This permitted medical use of Marinol, albeit with the severe restrictions associated with Schedule II status.[46] For instance, refills of Marinol prescriptions were not permitted. At its 10th meeting, on April 29, 1991, the Commission on Narcotic Drugs, in accordance with article 2, paragraphs 5 and 6, of the Convention on Psychotropic Substances, decided that Δ⁹-tetrahydrocannabinol (also referred to as Δ⁹-THC) and its stereochemical variants should be transferred from Schedule I to Schedule II of that Convention. This released Marinol from the restrictions imposed by Article 7 of the Convention (See also United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances).[citation needed]

An article published in the April–June 1998 issue of the Journal of Psychoactive Drugs found that "Healthcare professionals have detected no indication of scrip-chasing or doctor-shopping among the patients for whom they have prescribed dronabinol". The authors state that Marinol has a low potential for abuse.[47]

In 1999, Marinol was rescheduled from Schedule II to III of the Controlled Substances Act, reflecting a finding that THC had a potential for abuse less than that of cocaine and heroin. This rescheduling constituted part of the argument for a 2002 petition for removal of cannabis from Schedule I of the Controlled Substances Act, in which petitioner Jon Gettman noted, "Cannabis is a natural source of dronabinol (THC), the ingredient of Marinol, a Schedule III drug. There are no grounds to schedule cannabis in a more restrictive schedule than Marinol".[48]

At its 33rd meeting, in 2003, the World Health Organization Expert Committee on Drug Dependence recommended transferring THC to Schedule IV of the Convention, citing its medical uses and low abuse potential.[49]

Society and culture

Brand names


Dronabinol is marketed as Marinol,[50] a registered trademark of Solvay Pharmaceuticals. Dronabinol is also marketed, sold, and distributed by PAR Pharmaceutical Companies under the terms of a license and distribution agreement with SVC pharma LP, an affiliate of Rhodes Technologies.[citation needed] Dronabinol is available as a prescription drug (under Marinol[51]) in several countries including the United States, Germany, South Africa and Australia.[52] In the United States, Marinol is a Schedule III drug, available by prescription, considered to be non-narcotic and to have a low risk of physical or mental dependence. Efforts to get cannabis rescheduled as analogous to Marinol have not succeeded thus far, though a 2002 petition was accepted by the DEA. As a result of the rescheduling of Marinol from Schedule II to Schedule III, refills are now permitted for this substance. Marinol's U.S. Food and Drug Administration (FDA) approval for medical use has raised much controversy[53] as to why natural THC is considered a Schedule I drug.[54]

Comparisons with medical cannabis

Female cannabis plants contain at least 113 cannabinoids,[55] including cannabidiol (CBD), thought to be the major anticonvulsant that helps people with multiple sclerosis;[56] and cannabichromene (CBC), an anti-inflammatory which may contribute to the pain-killing effect of cannabis.[57]

It takes over one hour for Marinol to reach full systemic effect,[58] compared to seconds or minutes for smoked or vaporized cannabis.[59] Some people accustomed to inhaling just enough cannabis smoke to manage symptoms have complained of too-intense intoxication from Marinol's predetermined dosages[citation needed]. Many people using Marinol have said that Marinol produces a more acute psychedelic effect than cannabis, and it has been speculated that this disparity can be explained by the moderating effect of the many non-THC cannabinoids present in cannabis.[citation needed] For that reason, alternative THC-containing medications based on botanical extracts of the cannabis plant such as nabiximols are being developed. Mark Kleiman, director of the Drug Policy Analysis Program at UCLA's School of Public Affairs said of Marinol, "It wasn't any fun and made the user feel bad, so it could be approved without any fear that it would penetrate the recreational market, and then used as a club with which to beat back the advocates of whole cannabis as a medicine."[60] Mr. Kleiman's opinion notwithstanding, clinical trials comparing the use of cannabis extracts with Marinol in the treatment of cancer cachexia have demonstrated equal efficacy and well-being among subjects in the two treatment arms.[61] United States federal law currently registers dronabinol as a Schedule III controlled substance, but all other cannabinoids remain Schedule I, except synthetics like nabilone.[62]

Research

Its status as an illegal drug in most countries can make research difficult; for instance, in the United States the National Institute on Drug Abuse was the only legal source of cannabis for researchers until cannabis was recently legalized in Colorado, Washington state, Oregon, Alaska, California, Massachusetts and Washington, D.C.[63]

In April 2014 the American Academy of Neurology published a systematic review of the efficacy and safety of medical marijuana and marijuana-derived products in certain neurological disorders.[21] The review identified 34 studies meeting inclusion criteria, of which 8 were rated as Class I quality.[21] The study found evidence supporting the effectiveness of the cannabis extracts that were tested and THC in treating certain symptoms of multiple sclerosis, but found insufficient evidence to determine the effectiveness of the tested cannabis products in treating several other neurological diseases.[21]

Several of the clinical trials exploring the safety and efficacy of "oral cannabis extract" that were reviewed by the AAN were conducted using "Cannador", made by the Institute for Clinical Research (IKF) in Berlin.[64] Cannador is a capsule containing a standardized Cannabis sativa extract; the cannabis is grown in Switzerland and processed in Germany.[65]:88 Each capsule of Cannador contains 2.5 mg Δ⁹-tetrahydrocannabinol, with cannabidiol standardized to a range of 0.8–1.8 mg.[66]

Multiple sclerosis symptoms

  • Spasticity. Based on the results of 3 high quality trials and 5 of lower quality, oral cannabis extract was rated as effective, and THC as probably effective, for improving people's subjective experience of spasticity. Oral cannabis extract and THC both were rated as possibly effective for improving objective measures of spasticity.[21]
  • Centrally mediated pain and painful spasms. Based on the results of 4 high quality trials and 4 low quality trials, oral cannabis extract was rated as effective, and THC as probably effective in treating central pain and painful spasms.[21]
  • Bladder dysfunction. Based on a single high quality study, oral cannabis extract and THC were rated as probably ineffective for controlling bladder complaints in multiple sclerosis.[21]

Neurodegenerative disorders

  • Huntington disease. No reliable conclusions could be drawn regarding the effectiveness of THC or oral cannabis extract in treating the symptoms of Huntington disease, as the available trials were too small to reliably detect any difference.[21]
  • Parkinson disease. Based on a single study, oral cannabis extract was rated probably ineffective in treating levodopa-induced dyskinesia in Parkinson disease.[21]
  • Alzheimer's disease. A 2011 Cochrane Review found insufficient evidence to conclude whether cannabis products have any utility in the treatment of Alzheimer's disease.[67]

Other neurological disorders

  • Tourette syndrome. The available data was determined to be insufficient to allow reliable conclusions to be drawn regarding the effectiveness of oral cannabis extract or THC in controlling tics.[21]
  • Cervical dystonia. Insufficient data was available to assess the effectiveness of oral cannabis extract or THC in treating cervical dystonia.[21]
  • Epilepsy. Data was considered insufficient to judge the utility of cannabis products in reducing seizure frequency or severity.[21]

Saturday, April 29, 2017

Paradigm

From Wikipedia, the free encyclopedia

In science and philosophy, a paradigm /ˈpærədaɪm/ is a distinct set of concepts or thought patterns, including theories, research methods, postulates, and standards for what constitutes legitimate contributions to a field.

Etymology

Paradigm comes from Greek παράδειγμα (paradeigma), "pattern, example, sample"[1] from the verb παραδείκνυμι (paradeiknumi), "exhibit, represent, expose"[2] and that from παρά (para), "beside, beyond"[3] and δείκνυμι (deiknumi), "to show, to point out".[4]
In rhetoric, paradeigma is known as a type of proof. The purpose of paradeigma is to provide an audience with an illustration of similar occurrences. The illustration is not meant to take the audience to a conclusion; rather, it is used to help guide them there. A personal accountant offers a good analogy for how paradeigma is meant to guide an audience: it is not the accountant's job to tell the client exactly what (and what not) to spend money on, but to help guide the client in how money should be spent in light of the client's financial goals. Anaximenes defined paradeigma as "actions that have occurred previously and are similar to, or the opposite of, those which we are now discussing."[5]

The original Greek term παράδειγμα (paradeigma) was used in Greek texts such as Plato's Timaeus (28A) as the model or the pattern that the Demiurge (god) used to create the cosmos. The term had a technical meaning in the field of grammar: the 1900 Merriam-Webster dictionary defines its technical use only in the context of grammar or, in rhetoric, as a term for an illustrative parable or fable. In linguistics, Ferdinand de Saussure used paradigm to refer to a class of elements with similarities.

The Merriam-Webster Online dictionary defines this usage as "a philosophical and theoretical framework of a scientific school or discipline within which theories, laws, and generalizations and the experiments performed in support of them are formulated; broadly: a philosophical or theoretical framework of any kind."[6]

The Oxford Dictionary of Philosophy attributes the following description of the term to Thomas Kuhn's The Structure of Scientific Revolutions:
Kuhn suggests that certain scientific works, such as Newton's Principia or John Dalton's New System of Chemical Philosophy (1808), provide an open-ended resource: a framework of concepts, results, and procedures within which subsequent work is structured. Normal science proceeds within such a framework or paradigm. A paradigm does not impose a rigid or mechanical approach, but can be taken more or less creatively and flexibly.[7]

Scientific paradigm

The Oxford English Dictionary defines the basic meaning of the term paradigm as "a typical example or pattern of something; a pattern or model".[8] The historian of science Thomas Kuhn gave it its contemporary meaning when he adopted the word to refer to the set of concepts and practices that define a scientific discipline at any particular period of time. In his book The Structure of Scientific Revolutions (first published in 1962), Kuhn defines a scientific paradigm as "universally recognized scientific achievements that, for a time, provide model problems and solutions for a community of practitioners",[9] i.e.,
  • what is to be observed and scrutinized
  • the kind of questions that are supposed to be asked and probed for answers in relation to this subject
  • how these questions are to be structured
  • what predictions are made by the primary theory within the discipline
  • how the results of scientific investigations should be interpreted
  • how an experiment is to be conducted, and what equipment is available to conduct the experiment.
In The Structure of Scientific Revolutions, Kuhn saw the sciences as going through alternating periods of normal science, when an existing model of reality dominates a protracted period of puzzle-solving, and revolution, when the model of reality itself undergoes sudden drastic change. Paradigms have two aspects. Firstly, within normal science, the term refers to the set of exemplary experiments that are likely to be copied or emulated. Secondly, underpinning this set of exemplars are shared preconceptions, made prior to – and conditioning – the collection of evidence.[10] These preconceptions embody both hidden assumptions and elements that he describes as quasi-metaphysical;[11] the interpretations of the paradigm may vary among individual scientists.[12]

Kuhn was at pains to point out that the rationale for the choice of exemplars is a specific way of viewing reality: that view and the status of "exemplar" are mutually reinforcing. For well-integrated members of a particular discipline, its paradigm is so convincing that it normally renders even the possibility of alternatives unconvincing and counter-intuitive. Such a paradigm is opaque, appearing to be a direct view of the bedrock of reality itself, and obscuring the possibility that there might be other, alternative imageries hidden behind it. The conviction that the current paradigm is reality tends to disqualify evidence that might undermine the paradigm itself; this in turn leads to a build-up of unreconciled anomalies. It is the latter that is responsible for the eventual revolutionary overthrow of the incumbent paradigm, and its replacement by a new one. Kuhn used the expression paradigm shift (see below) for this process, and likened it to the perceptual change that occurs when our interpretation of an ambiguous image "flips over" from one state to another.[13] (The rabbit-duck illusion is an example: it is not possible to see both the rabbit and the duck simultaneously.) This is significant in relation to the issue of incommensurability (see below).

An example of a currently accepted paradigm would be the standard model of physics. The scientific method allows for orthodox scientific investigations into phenomena that might contradict or disprove the standard model; however, grant funding would be proportionately more difficult to obtain for such experiments, depending on the degree of deviation from the accepted standard model theory that the experiment would test for. To illustrate the point, an experiment to test for the mass of neutrinos or the decay of protons (small departures from the model) is more likely to receive money than an experiment that looks for violations of the conservation of momentum, or ways to engineer reverse time travel.

Mechanisms similar to the original Kuhnian paradigm have been invoked in various disciplines other than the philosophy of science. These include: the idea of major cultural themes,[14][15] worldviews (and see below), ideologies, and mindsets. They have somewhat similar meanings that apply to smaller and larger scale examples of disciplined thought. In addition, Michel Foucault used the terms episteme and discourse, mathesis and taxinomia, for aspects of a "paradigm" in Kuhn's original sense.

Paradigm shifts

In The Structure of Scientific Revolutions, Kuhn wrote that "the successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science." (p. 12)
Paradigm shifts tend to appear in response to the accumulation of critical anomalies as well as the proposal of a new theory with the power to encompass both older relevant data and explain relevant anomalies. New paradigms tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, a statement generally attributed to physicist Lord Kelvin famously claimed, "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement."[16] Five years later, Albert Einstein published his paper on special relativity, which challenged the set of rules laid down by Newtonian mechanics, which had been used to describe force and motion for over two hundred years. In this case, the new paradigm reduces the old to a special case in the sense that Newtonian mechanics is still a good model for approximation for speeds that are slow compared to the speed of light. Many philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it. Kuhn's original model is now generally seen as too limited[citation needed].

Kuhn's idea was itself revolutionary in its time, as it caused a major change in the way that academics talk about science. Thus, it may be that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognize such a paradigm shift. In the social sciences, people can still use earlier ideas to discuss the history of science.

Paradigm paralysis

Perhaps the greatest barrier to a paradigm shift, in some cases, is the reality of paradigm paralysis: the inability or refusal to see beyond the current models of thinking.[17] This is similar to what psychologists term confirmation bias. Examples include the rejection of Aristarchus of Samos', Copernicus', and Galileo's theory of a heliocentric solar system, and of the discoveries of electrostatic photography, xerography and the quartz clock.[citation needed]

Incommensurability

Kuhn pointed out that it could be difficult to assess whether a particular paradigm shift had actually led to progress, in the sense of explaining more facts, explaining more important facts, or providing better explanations, because the understanding of "more important", "better", etc. changed with the paradigm. The two versions of reality are thus incommensurable. Kuhn's version of incommensurability has an important psychological dimension; this is apparent from his analogy between a paradigm shift and the flip-over involved in some optical illusions.[18] However, he subsequently diluted his commitment to incommensurability considerably, partly in the light of other studies of scientific development that did not involve revolutionary change.[19] One of the examples of incommensurability that Kuhn used was the change in the style of chemical investigations that followed the work of Lavoisier on atomic theory in the late 18th Century.[13] In this change, the focus had shifted from the bulk properties of matter (such as hardness, colour, reactivity, etc.) to studies of atomic weights and quantitative studies of reactions. He suggested that it was impossible to make the comparison needed to judge which body of knowledge was better or more advanced. However, this change in research style (and paradigm) eventually (after more than a century) led to a theory of atomic structure that accounts well for the bulk properties of matter; see, for example, Brady's General Chemistry.[20] According to P J Smith, this ability of science to back off, move sideways, and then advance is characteristic of the natural sciences,[21] but contrasts with the position in some social sciences, notably economics.[22]

This apparent ability does not guarantee that the account is veridical at any one time, of course, and most modern philosophers of science are fallibilists. However, members of other disciplines do see the issue of incommensurability as a much greater obstacle to evaluations of "progress"; see, for example, Martin Slattery's Key Ideas in Sociology.[23][24]

Subsequent developments

Opaque Kuhnian paradigms and paradigm shifts do exist. A few years after the discovery of the mirror-neurons that provide a hard-wired basis for the human capacity for empathy, the scientists involved were unable to identify the incidents that had directed their attention to the issue. Over the course of the investigation, their language and metaphors had changed so that they themselves could no longer interpret all of their own earlier laboratory notes and records.[25]

Imre Lakatos and research programmes

However, many instances exist in which change in a discipline's core model of reality has happened in a more evolutionary manner, with individual scientists exploring the usefulness of alternatives in a way that would not be possible if they were constrained by a paradigm. Imre Lakatos suggested (as an alternative to Kuhn's formulation) that scientists actually work within research programmes.[26] In Lakatos' sense, a research programme is a sequence of problems, placed in order of priority. This set of priorities, and the associated set of preferred techniques, is the positive heuristic of a programme. Each programme also has a negative heuristic; this consists of a set of fundamental assumptions that – temporarily, at least – takes priority over observational evidence when the two appear to conflict.

This latter aspect of research programmes is inherited from Kuhn's work on paradigms,[citation needed] and represents an important departure from the elementary account of how science works. On that account, science proceeds through repeated cycles of observation, induction, hypothesis-testing, etc., with the test of consistency with empirical evidence being imposed at each stage. Paradigms and research programmes allow anomalies to be set aside, where there is reason to believe that they arise from incomplete knowledge (about either the substantive topic, or some aspect of the theories implicitly used in making observations).

Larry Laudan: Dormant anomalies, fading credibility, and research traditions

Larry Laudan[27] has also made two important contributions to the debate. Laudan believed that something akin to paradigms exists in the social sciences (Kuhn had contested this; see below); he referred to these as research traditions. Laudan noted that some anomalies become "dormant" if they survive a long period during which no competing alternative has shown itself capable of resolving them. He also presented cases in which a dominant paradigm had withered away because it had lost credibility when viewed against changes in the wider intellectual milieu.

In social sciences

Kuhn himself did not consider the concept of paradigm as appropriate for the social sciences. He explains in his preface to The Structure of Scientific Revolutions that he developed the concept of paradigm precisely to distinguish the social from the natural sciences. While visiting the Center for Advanced Study in the Behavioral Sciences in 1958 and 1959, surrounded by social scientists, he observed that they were never in agreement about the nature of legitimate scientific problems and methods. He explains that he wrote this book precisely to show that there can never be any paradigms in the social sciences. Mattei Dogan, a French sociologist, in his article "Paradigms in the Social Sciences," develops Kuhn's original thesis that there are no paradigms at all in the social sciences, since the concepts are polysemic, scholars maintain a deliberate mutual ignorance of one another's work, and schools of thought proliferate in these disciplines. Dogan provides many examples of the non-existence of paradigms in the social sciences in his essay, particularly in sociology, political science and political anthropology.

However, both Kuhn's original work and Dogan's commentary are directed at disciplines that are defined by conventional labels (such as "sociology"). While it is true that such broad groupings in the social sciences are usually not based on a Kuhnian paradigm, each of the competing sub-disciplines may still be underpinned by a paradigm, research programme, research tradition, and/or professional imagery. These structures will be motivating research, providing it with an agenda, defining what is and is not anomalous evidence, and inhibiting debate with other groups that fall under the same broad disciplinary label. (A good example is provided by the contrast between Skinnerian behaviourism and Personal Construct Theory (PCT) within psychology. The most significant of the many ways these two sub-disciplines of psychology differ concerns meanings and intentions. In PCT, they are seen as the central concern of psychology; in behaviourism, they are not scientific evidence at all, as they cannot be directly observed.)

Such considerations explain the conflict between the Kuhn/Dogan view and the views of others (including Larry Laudan, see above), who do apply these concepts to the social sciences.

M. L. Handa (1986)[28] introduced the idea of a "social paradigm" in the context of the social sciences. He identified the basic components of a social paradigm. Like Kuhn, Handa addressed the issue of changing paradigms, the process popularly known as a "paradigm shift". In this respect, he focused on the social circumstances that precipitate such a shift and its effects on social institutions, including the institution of education. This broad shift in the social arena, in turn, changes the way the individual perceives reality.

Another use of the word paradigm is in the sense of "worldview". For example, in social science, the term is used to describe the set of experiences, beliefs and values that affect the way an individual perceives reality and responds to that perception. Social scientists have adopted the Kuhnian phrase "paradigm shift" to denote a change in how a given society goes about organizing and understanding reality. A "dominant paradigm" refers to the values, or system of thought, in a society that are most standard and widely held at a given time. Dominant paradigms are shaped both by the community's cultural background and by the context of the historical moment. Hutchin[29] outlines some conditions that help a system of thought become an accepted dominant paradigm:
  • Professional organizations that give legitimacy to the paradigm
  • Dynamic leaders who introduce and purport the paradigm
  • Journals and editors who write about the system of thought. They both disseminate the information essential to the paradigm and give the paradigm legitimacy
  • Government agencies who give credence to the paradigm
  • Educators who propagate the paradigm's ideas by teaching it to students
  • Conferences conducted that are devoted to discussing ideas central to the paradigm
  • Media coverage
  • Lay groups, or groups based around the concerns of lay persons, that embrace the beliefs central to the paradigm
  • Sources of funding to further research on the paradigm

Other uses

The word paradigm is also still used to indicate a pattern or model or an outstandingly clear or typical example or archetype. The term is frequently used in this sense in the design professions. Design Paradigms or archetypes comprise functional precedents for design solutions. The best known references on design paradigms are Design Paradigms: A Sourcebook for Creative Visualization, by Wake, and Design Paradigms by Petroski.

This term is also used in cybernetics. Here it means (in a very wide sense) a (conceptual) protoprogram for reducing the chaotic mass to some form of order. Note the similarities to the concept of entropy in chemistry and physics. A paradigm there would be a sort of prohibition to proceed with any action that would increase the total entropy of the system. To create a paradigm requires a closed system that accepts changes. Thus a paradigm can only apply to a system that is not in its final stage.

Beyond its use in the physical and social sciences, Kuhn's paradigm concept has been analysed in relation to its applicability in identifying 'paradigms' with respect to worldviews at specific points in history. One example is Matthew Edward Harris' book The Notion of Papal Monarchy in the Thirteenth Century: The Idea of Paradigm in Church History.[30] Harris stresses the primarily sociological importance of paradigms, pointing towards Kuhn's second edition of The Structure of Scientific Revolutions. Although obedience to popes such as Innocent III and Boniface VIII was widespread, even written testimony from the time showing loyalty to the pope does not demonstrate that the writer had the same worldview as the Church, and therefore pope, at the centre. The difference between paradigms in the physical sciences and in historical organisations such as the Church is that the former, unlike the latter, requires technical expertise rather than repeating statements. In other words, after scientific training through what Kuhn calls 'exemplars', one could not genuinely believe that, to take a trivial example, the earth is flat, whereas a thinker such as Giles of Rome in the thirteenth century could write in favour of the pope, and then easily write similarly glowing things about the king. A writer such as Giles would have wanted a good job from the pope; he was a papal publicist. However, Harris writes that 'scientific group membership is not concerned with desire, emotions, gain, loss and any idealistic notions concerning the nature and destiny of humankind...but simply to do with aptitude, explanation, [and] cold description of the facts of the world and the universe from within a paradigm'.[31]
