Monday, December 8, 2025

Linear no-threshold model

From Wikipedia, the free encyclopedia
Different assumptions on the extrapolation of the cancer risk vs. radiation dose to low-dose levels, given a known risk at a high dose: (A) supra-linearity, (B) linear, (C) linear-quadratic, (D) hormesis

The linear no-threshold model (LNT) is a dose-response model used in radiation protection to estimate stochastic health effects such as radiation-induced cancer, genetic mutations and teratogenic effects on the human body due to exposure to ionizing radiation. The model assumes a linear relationship between dose and health effects, even for very low doses where biological effects are more difficult to observe. The LNT model implies that all exposure to ionizing radiation is harmful, regardless of how low the dose is, and that the effect is cumulative over a lifetime.

The LNT model is commonly used by regulatory bodies as a basis for formulating public health policies that set regulatory dose limits to protect against the effects of radiation. The validity of the LNT model, however, is disputed, and other models exist: the threshold model, which assumes that very small exposures are harmless; the radiation hormesis model, which says that radiation at very small doses can be beneficial; and the supra-linear model. It has been argued that the LNT model may have created an irrational fear of radiation.

Scientific organizations and government regulatory bodies generally support the use of the LNT model, particularly for optimization. However, some caution against estimating health effects from doses below a certain level (see § Controversy).

Introduction

Stochastic health effects are those that occur by chance, and whose probability is proportional to the dose, but whose severity is independent of the dose. The LNT model assumes there is no lower threshold at which stochastic effects start, and assumes a linear relationship between dose and the stochastic health risk. In other words, LNT assumes that radiation has the potential to cause harm at any dose level, however small, and the sum of several very small exposures is just as likely to cause a stochastic health effect as a single larger exposure of equal dose value. In contrast, deterministic health effects are radiation-induced effects such as acute radiation syndrome, which are caused by tissue damage. Deterministic effects reliably occur above a threshold dose and their severity increases with dose. Because of the inherent differences, LNT is not a model for deterministic effects, which are instead characterized by other types of dose-response relationships.
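To make the linearity and additivity assumptions concrete, here is a minimal sketch in Python. The 5.5%/Sv coefficient is the ICRP's nominal cancer risk figure, used here purely for illustration; applying it at these dose levels is precisely what is disputed.

    # Minimal sketch of the LNT assumption (illustrative only).
    # risk_per_sv is the ICRP's nominal ~5.5%/Sv cancer risk
    # coefficient, used here purely for illustration.
    def lnt_excess_risk(dose_sv, risk_per_sv=0.055):
        """Excess stochastic (cancer) risk under LNT: linear in dose, no threshold."""
        return dose_sv * risk_per_sv

    # Linearity with no threshold implies doses add: ten 1 mSv exposures
    # carry the same modeled risk as a single 10 mSv exposure.
    assert abs(10 * lnt_excess_risk(0.001) - lnt_excess_risk(0.010)) < 1e-12
    print(lnt_excess_risk(0.010))  # 0.00055, i.e. ~0.055% excess lifetime risk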

LNT is a common model for calculating the probability of radiation-induced cancer at high doses, where epidemiological studies support its application, and, more controversially, at low doses, a dose region with lower predictive statistical confidence. Nonetheless, regulatory bodies such as the Nuclear Regulatory Commission (NRC) commonly use LNT as a basis for regulatory dose limits to protect against stochastic health effects, as found in many public health policies. Whether the LNT model describes reality for small-dose exposures is disputed, and challenges to the NRC's use of the LNT model in setting radiation protection regulations have been submitted. The NRC rejected the petitions in 2021 because "they fail to present an adequate basis supporting the request to discontinue use of the LNT model".

Other dose models include the threshold model, which assumes that very small exposures are harmless, and the radiation hormesis model, which claims that radiation at very small doses can be beneficial. Because the current data are inconclusive, scientists disagree on which model should be used, though most national and international cancer research organizations explicitly endorse LNT for regulating exposures to low-dose radiation. The model is sometimes used to quantify the cancer effect of collective doses of low-level radioactive contamination, a practice that is controversial and has been criticized by the International Commission on Radiological Protection since 2007.

Origins

Increased risk of solid cancer with dose for A-bomb survivors, from the BEIR report. Notably, this exposure came as a massive spike or pulse of radiation delivered in the brief instant of the explosion; while somewhat similar to the exposure environment of a CT scan, it is wholly unlike the low dose rate of living in a contaminated area such as Chernobyl, where the dose rate is orders of magnitude smaller. LNT does not consider dose rate: it is a one-size-fits-all approach based solely on total absorbed dose, which has not been verified in these other settings. It has also been pointed out that bomb survivors inhaled carcinogenic benzopyrene from the burning cities, yet this is not factored in.

The association of exposure to radiation with cancer had been observed as early as 1902, six years after the discovery of X-rays by Wilhelm Röntgen and radioactivity by Henri Becquerel. In 1927, Hermann Muller demonstrated that radiation may cause genetic mutation. He also suggested mutation as a cause of cancer. Gilbert N. Lewis and Alex Olson, building on Muller's discovery of the effect of radiation on mutation, proposed a mechanism for biological evolution in 1928, suggesting that genomic mutation was induced by cosmic and terrestrial radiation, and first introduced the idea that such mutation may occur in proportion to the dose of radiation. Various laboratories, including Muller's, then demonstrated the apparent linear dose response of mutation frequency. Muller, who received a Nobel Prize for his work on the mutagenic effect of radiation in 1946, asserted in his Nobel lecture, The Production of Mutations, that mutation frequency is "directly and simply proportional to the dose of irradiation applied" and that there is "no threshold dose".

The early studies were based on higher levels of radiation, which made it hard to establish the safety of low levels of radiation; indeed, many early scientists believed that there may be a tolerance level and that low doses of radiation may not be harmful. A 1955 study on mice exposed to low doses of radiation suggested that they may outlive control animals. Interest in the effects of radiation intensified after the dropping of atomic bombs on Hiroshima and Nagasaki, and studies were conducted on the survivors. Although compelling evidence on the effects of low doses of radiation was hard to come by, by the late 1940s the idea of LNT had become more popular because of its mathematical simplicity. In 1954, the National Council on Radiation Protection and Measurements (NCRP) introduced the concept of maximum permissible dose. In 1958, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) assessed the LNT model and a threshold model, but noted the difficulty in acquiring "reliable information about the correlation between small doses and their effects either in individuals or in large populations". The United States Congress Joint Committee on Atomic Energy (JCAE) similarly could not establish whether there is a threshold or "safe" level for exposure; nevertheless, it introduced the concept of "As Low As Reasonably Achievable" (ALARA). ALARA would become a fundamental principle in radiation protection policy that implicitly accepts the validity of LNT. In 1959, the United States Federal Radiation Council (FRC) supported the concept of LNT extrapolation down to the low-dose region in its first report.

By the 1970s, the LNT model had become accepted as the standard in radiation protection practice by a number of bodies. In 1972, the first report of the National Academy of Sciences (NAS) Biological Effects of Ionizing Radiation (BEIR) committee, an expert panel that reviewed the available peer-reviewed literature, supported the LNT model on pragmatic grounds, noting that while the "dose-effect relationship for x rays and gamma rays may not be a linear function", the "use of linear extrapolation ... may be justified on pragmatic grounds as a basis for risk estimation." In its seventh report of 2006, NAS BEIR VII writes, "the committee concludes that the preponderance of information indicates that there will be some risk, even at low doses".

The Health Physics Society (in the United States) has published a documentary series on the origins of the LNT model.

Radiation precautions and public policy

Radiation precautions have led to sunlight being listed as a carcinogen at all sun exposure rates, due to the ultraviolet component of sunlight, with no safe level of sunlight exposure being suggested, following the precautionary LNT model. According to a 2007 study submitted by the University of Ottawa to the Department of Health and Human Services in Washington, D.C., there is not enough information to determine a safe level of sun exposure.

The linear no-threshold model is used to extrapolate the expected number of extra deaths caused by exposure to environmental radiation, and it therefore has a great impact on public policy. The model is used to translate any radiation release into a number of lives lost, while any reduction in radiation exposure, for example as a consequence of radon detection, is translated into a number of lives saved. When the doses are very low, the model predicts new cancers only in a very small fraction of the population, but for a large population the number of lives is extrapolated into hundreds or thousands.
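A worked example of this extrapolation, using hypothetical exposure numbers and the same illustrative nominal risk coefficient as above:

    # Collective-dose extrapolation as described above. The exposure
    # numbers are hypothetical and the practice itself is controversial.
    population = 10_000_000       # people exposed
    individual_dose_sv = 0.0001   # 0.1 mSv each, well below annual background
    risk_per_sv = 0.055           # illustrative nominal LNT coefficient

    collective_dose = population * individual_dose_sv  # person-sieverts
    excess_cancers = collective_dose * risk_per_sv
    print(f"{collective_dose:.0f} person-Sv -> ~{excess_cancers:.0f} excess cancers")

Here 1,000 person-Sv yields roughly 55 predicted excess cancers from individual doses no one could ever notice; it is exactly this multiplication of tiny doses over large populations that the ICRP and UNSCEAR caution against (see § Controversy).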

A linear model has long been used in health physics to set maximum acceptable radiation exposures.

In 2025, Donald Trump issued an executive order that proposed implementing "determinate radiation limits" to replace the linear no-threshold model and ALARA principle. These changes were proposed to ease the licensing requirements on new nuclear power plants in the United States.

Controversy

The LNT model has been contested by a number of scientists. It has been claimed that the model's early proponent, Hermann Joseph Muller, intentionally ignored an early study that did not support the LNT model when he gave his 1946 Nobel Prize lecture advocating the model.

From very-high-dose radiation therapy it is known that radiation can cause a physiological increase in the rate of pregnancy anomalies; however, human exposure data and animal testing suggest that the "malformation of organs appears to be a deterministic effect with a threshold dose", below which no increase in the rate is observed. A 1999 review of the link between the Chernobyl accident and teratology (birth defects) concluded that "there is no substantive proof regarding radiation-induced teratogenic effects from the Chernobyl accident". It is argued that the human body has defense mechanisms, such as DNA repair and programmed cell death, that would protect it against carcinogenesis due to low-dose exposures to carcinogens; however, these repair mechanisms are known to be error-prone.

A 2011 study of cellular repair mechanisms supports the evidence against the linear no-threshold model. According to its authors, this study, published in the Proceedings of the National Academy of Sciences of the United States of America, "casts considerable doubt on the general assumption that risk to ionizing radiation is proportional to dose".

A 2011 review of studies addressing childhood leukaemia following exposure to ionizing radiation, including both diagnostic exposure and natural background exposure from radon, concluded that the existing risk factor, the excess relative risk per sievert (ERR/Sv), is "broadly applicable" to low-dose or low-dose-rate exposure, "although the uncertainties associated with this estimate are considerable". The study also notes that "epidemiological studies have been unable, in general, to detect the influence of natural background radiation upon the risk of childhood leukaemia".
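For concreteness, an excess-relative-risk model of this form scales a baseline rate multiplicatively with dose. The numbers below are invented for illustration and are not taken from the review:

    # Hypothetical application of an excess-relative-risk (ERR) model.
    # All three numbers are invented for illustration.
    baseline_risk = 0.0005   # assumed baseline childhood leukaemia risk
    err_per_sv = 50.0        # hypothetical ERR/Sv coefficient
    dose_sv = 0.001          # 1 mSv

    total_risk = baseline_risk * (1 + err_per_sv * dose_sv)
    print(total_risk)  # 0.000525: the baseline inflated by 5% at 1 mSv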

Many expert scientific panels have been convened on the risks of ionizing radiation. Most explicitly support the LNT model and none have concluded that evidence exists for a threshold, with the exception of the French Academy of Sciences in a 2005 report. Considering the uncertainty of health effects at low doses, several organizations caution against estimating health effects below certain doses, generally below natural background, as noted below:

  • The US Nuclear Regulatory Commission upheld the LNT model in 2021 as a "sound regulatory basis for minimizing the risk of unnecessary radiation exposure to both members of the public and radiation workers" following challenges to the dose limit requirements contained in its regulations.

    Based upon the current state of science, the NRC concludes that the actual level of risk associated with low doses of radiation remains uncertain and some studies, such as the INWORKS study, show there is at least some risk from low doses of radiation. Moreover, the current state of science does not provide compelling evidence of a threshold, as highlighted by the fact that no national or international authoritative scientific advisory bodies have concluded that such evidence exists. Therefore, based upon the stated positions of the aforementioned advisory bodies; the comments and recommendations of NCI, NIOSH, and the EPA; the October 28, 2015, recommendation of the ACMUI; and its own professional and technical judgment, the NRC has determined that the LNT model continues to provide a sound regulatory basis for minimizing the risk of unnecessary radiation exposure to both members of the public and occupational workers. Consequently, the NRC will retain the dose limits for occupational workers and members of the public in 10 CFR part 20 radiation protection regulations.

  • In 2005 the United States National Academies' National Research Council published its comprehensive meta-analysis of low-dose radiation research BEIR VII, Phase 2. In its press release the Academies stated:

    The scientific research base shows that there is no threshold of exposure below which low levels of ionizing radiation can be demonstrated to be harmless or beneficial.

  • In a 2005 report, the International Commission on Radiological Protection stated: "The report concludes that while existence of a low-dose threshold does not seem to be unlikely for radiation-related cancers of certain tissues, the evidence does not favour the existence of a universal threshold. The LNT hypothesis, combined with an uncertain DDREF for extrapolation from high doses, remains a prudent basis for radiation protection at low doses and low dose rates." In a 2007 report, ICRP noted that collective dose is effective for optimization, but aggregation of very low doses to estimate excess cancers is inappropriate because of large uncertainties.
  • The National Council on Radiation Protection and Measurements (a body commissioned by the United States Congress), in a 2018 report, "concludes that the recent epidemiological studies support the continued use of LNT model (with the steepness of the dose-response slope perhaps reduced by a DDREF factor) for radiation protection. This is in accord with judgments by other national and international scientific committees, based on somewhat older data, that no alternative dose-response relationship appears more pragmatic or prudent for radiation protection purposes than the LNT model."
  • The United States Environmental Protection Agency endorses the LNT model in its 2011 report on radiogenic cancer risk:

    Underlying the risk models is a large body of epidemiological and radiobiological data. In general, results from both lines of research are consistent with a linear, no-threshold dose (LNT) response model in which the risk of inducing a cancer in an irradiated tissue by low doses of radiation is proportional to the dose to that tissue.

  • UNSCEAR stated in Appendix C of its 2020/2021 report:

    The Committee concluded that there remains good justification for the use of a non-threshold model for risk inference given the robust knowledge on the role of mutation and chromosomal aberrations in carcinogenesis. That said, there are ways that radiation could act that might lead to a re-evaluation of the use of a linear dose-response model to infer radiation cancer risks.

A number of organisations caution against using the linear no-threshold model to estimate risk from radiation exposure below a certain level:

  • The French Academy of Sciences (Académie des sciences) and the National Academy of Medicine (Académie nationale de médecine) published a report in 2005 (at the same time as the BEIR VII report in the United States) that rejected the linear no-threshold model in favor of a threshold dose response and a significantly reduced risk at low radiation exposure:

    In conclusion, this report raises doubts on the validity of using LNT for evaluating the carcinogenic risk of low doses (< 100 mSv) and even more for very low doses (< 10 mSv). The LNT concept can be a useful pragmatic tool for assessing rules in radioprotection for doses above 10 mSv; however since it is not based on biological concepts of our current knowledge, it should not be used without precaution for assessing by extrapolation the risks associated with low and even more so, with very low doses (< 10 mSv), especially for benefit-risk assessments imposed on radiologists by the European directive 97-43.

  • The Health Physics Society's position statement first adopted in January 1996, last revised in February 2019, states:

    The Health Physics Society advises against estimating health risks to people from exposures to ionizing radiation that are near or less than natural background levels because statistical uncertainties at these low levels are great.

  • The American Nuclear Society states that the LNT model may not adequately describe the relationship between harm and exposure and notes the recommendation in ICRP-103 "that the LNT model not be used for estimating the health effects of trivial exposures received by large populations over long periods of time…" It further recommends additional research.
  • UNSCEAR stated in its 2012 report:

    The Scientific Committee does not recommend multiplying very low doses by large numbers of individuals to estimate numbers of radiation-induced health effects within a population exposed to incremental doses at levels equivalent to or lower than natural background levels.

Mental health effects

It has been argued that the LNT model has caused an irrational fear of radiation, whose observable effects are much more significant than the non-observable effects postulated by LNT. In the wake of the 1986 Chernobyl accident in Ukraine, anxieties spread among pregnant mothers across Europe over the perception, reinforced by the LNT model, that their children would be born with a higher rate of mutations. As far afield as Switzerland, hundreds of excess induced abortions of healthy fetuses were performed out of this no-threshold fear. Following the accident, however, data sets approaching a million births in the EUROCAT database, divided into "exposed" and control groups, were assessed in 1999. As no Chernobyl impacts were detected, the researchers concluded that "in retrospect the widespread fear in the population about the possible effects of exposure on the unborn was not justified". Despite studies from Germany and Turkey, the only robust evidence of negative pregnancy outcomes after the accident was these indirect effects of elective abortion, in Greece, Denmark, Italy, and elsewhere, due to the anxieties created.

The consequences of low-level radiation are often more psychological than radiological. Because damage from very-low-level radiation cannot be detected, people exposed to it are left in anguished uncertainty about what will happen to them. Many believe they have been fundamentally contaminated for life and may refuse to have children for fear of birth defects. They may be shunned by others in their community who fear a sort of mysterious contagion.

Forced evacuation from a radiation or nuclear accident may lead to social isolation, anxiety, depression, psychosomatic medical problems, reckless behavior, or suicide. Such was the outcome of the 1986 Chernobyl nuclear disaster in Ukraine. A comprehensive 2005 study concluded that "the mental health impact of Chernobyl is the largest public health problem unleashed by the accident to date". Frank N. von Hippel, a U.S. scientist, commented on the 2011 Fukushima nuclear disaster, saying that "fear of ionizing radiation could have long-term psychological effects on a large portion of the population in the contaminated areas".

Such great psychological danger does not accompany other materials that put people at risk of cancer and other deadly illnesses. Visceral fear is not widely aroused by, for example, the daily emissions from coal burning, although, as a National Academy of Sciences study found, these cause 10,000 premature deaths a year in the US. It is "only nuclear radiation that bears a huge psychological burden – for it carries a unique historical legacy".

Thalidomide scandal

From Wikipedia, the free encyclopedia
Feet of a baby born to a mother who had taken thalidomide while pregnant

In the late 1950s and early 1960s, thalidomide was prescribed in 46 countries to women who were pregnant or who subsequently became pregnant, resulting in the "biggest anthropogenic medical disaster ever": more than 10,000 children born with a range of severe deformities, such as phocomelia, as well as thousands of miscarriages.

Thalidomide was introduced in 1957 as a tranquilizer and was later marketed by the West German pharmaceutical company Chemie Grünenthal under the trade name Contergan as a medication for anxiety, trouble sleeping, tension, and morning sickness. It was sold as a sedative and as a medication for morning sickness without having been tested on pregnant women. While initially deemed safe in pregnancy, concerns regarding birth defects were noted in 1961, and the medication was removed from the market in Europe that year.

Development of thalidomide

Thalidomide was first developed as a tranquilizer by Swiss pharmaceutical company Ciba in 1953. In 1954, Ciba abandoned the product, and it was acquired by German pharmaceutical company Chemie Grünenthal. The company had been established by Hermann Wirtz Sr, a Nazi Party member, after World War II as a subsidiary of the family's Mäurer & Wirtz company. The company's initial aim was to develop antibiotics for which there was an urgent market need. Wirtz included many former Nazi associates in his company.

Birth defect crisis

The total number of embryos affected by the use of thalidomide during pregnancy is estimated at more than 10,000, and potentially up to 20,000; of these, approximately 40 percent died at or shortly after the time of birth. Those who survived had limb, eye, urinary tract, and heart defects. Its initial entry into the U.S. market was prevented by Frances Oldham Kelsey at the U.S. Food and Drug Administration (FDA). The birth defects of thalidomide led to the development of greater drug regulation and monitoring in many countries.

The severity and location of the deformities depended on how many days into the pregnancy the mother was when treatment began: thalidomide taken on the 20th day of pregnancy caused central brain damage; taken on day 21, it would damage the eyes; on day 22, the ears and face; on day 24, the arms; and leg damage would occur if it was taken up to day 28. Thalidomide did not damage the fetus if taken after 42 days' gestation.

United Kingdom

Artificial limbs made for an affected child in the 1960s by the Department of Health and Social Security's Limb Fitting Centre in Roehampton, London

In the UK, the drug was licensed in 1958 and withdrawn in 1961. Of the approximately 2,000 babies born with defects, around half died within a few months and 466 survived to at least 2010. In 1968, after a long campaign by The Sunday Times, a compensation settlement for the UK victims was reached with Distillers Company (now part of Diageo), which had distributed the drug in the UK. Distillers Biochemicals paid out approximately £28m in compensation following a legal battle.

The British Thalidomide Children's Trust was set up in 1973 as part of a £20 million legal settlement between Distillers Company and 429 children with thalidomide-related disabilities. In 1997, Diageo (formed by a merger between Grand Metropolitan and Guinness, who had taken over Distillers in 1990) made a long-term financial commitment to support the Thalidomide Trust and its beneficiaries. The UK government gave survivors a grant of £20 million, to be distributed through the Thalidomide Trust, in December 2009.

Spain

In Spain, thalidomide was widely available throughout the 1970s, and perhaps even into the 1980s. There were two reasons for this. First, state controls and safeguarding were poor; it was not until 2008 that the government even admitted the country had ever imported thalidomide. Second, Grünenthal failed to insist that its sister company in Madrid warn Spanish doctors of the defects. The Spanish advocacy group for victims of thalidomide estimates that in 2015 there were 250–300 living victims of thalidomide in Spain.

Australia and New Zealand

Australian obstetrician William McBride raised concern about thalidomide after a midwife called Sister Pat Sparrow first suspected the drug was causing birth defects in the babies of patients under McBride's care at Crown Street Women's Hospital in Sydney. German paediatrician Widukind Lenz, who also suspected the link, is credited with conducting the scientific research that proved thalidomide was causing birth defects in 1961. Further animal tests were conducted by George Somers, Chief Pharmacologist of Distillers Company in Britain, which showed fetal abnormalities in rabbits. Similar results were also published showing these effects in rats and other species.

Lynette Rowe, who was born without limbs, led an Australian class action lawsuit against the drug's manufacturer, Grünenthal, which fought to have the case heard in Germany. The Supreme Court of Victoria dismissed Grünenthal's application in 2012, and the case was heard in Australia. On 17 July 2012, Rowe was awarded an out-of-court settlement, believed to be in the millions of dollars and setting a precedent for class action victims to receive further compensation. In February 2014, the Supreme Court of Victoria endorsed a settlement of A$89 million to 107 victims of the drug in Australia and New Zealand.

Germany

In East Germany, thalidomide was rejected by the Central Committee of Experts for the Drug Traffic in the GDR and was never approved for use; there are no known thalidomide children born in East Germany. In West Germany, by contrast, it took some time before the increase in dysmelia at the end of the 1950s was connected with thalidomide. In 1958, Karl Beck, a former pediatric doctor in Bayreuth, wrote an article in a local newspaper claiming a relationship between nuclear weapons testing and cases of dysmelia in children. Based on this, FDP leader Erich Mende requested an official statement from the federal government. The main data series used to research dysmelia cases happened, by chance, to start at the same time as the approval date for thalidomide. After the Nazi regime, with its Law for the Prevention of Hereditarily Diseased Offspring, had used mandatory statistical monitoring to commit various crimes, West Germany had been very reluctant to monitor congenital disorders in a similarly strict way. The parliamentary report rejected any relation between radioactivity and the abnormal increase of dysmelia. The DFG research project set up after Mende's request was also unhelpful: it was led by pathologist Franz Büchner, who ran the project to propagate his teratological theory, regarding the mothers' poor nutrition and behavior as more important than genetic causes. Furthermore, it took a while to appoint a Surgeon General in Germany; the Federal Ministry of Health was not founded until 1962, some months after thalidomide was banned from the market. In West Germany approximately 2,500 children were born with birth defects from thalidomide.

Canada

Despite its severe side effects, thalidomide remained on sale in Canadian pharmacies until 1962. Its effects increased fears regarding the safety of pharmaceutical drugs, and the need for testing and approval of the toxins in certain pharmaceutical drugs became more important after the disaster. The Society of Toxicology of Canada was formed after the effects of thalidomide were made public, focusing on toxicology as a discipline separate from pharmacology; it is responsible for the Conservation Environment Protection Act, focusing on research into the impact of chemical substances on human health. Thalidomide changed the way drugs are tested and the types of drugs used during pregnancy, and it increased awareness of the potential side effects of drugs.

According to Canadian news magazine programme W5, most, but not all, victims of thalidomide receive annual benefits as compensation from the Government of Canada. Excluded are those who cannot provide the documentation the government requires.

A group of 120 Canadian survivors formed the Thalidomide Victims Association of Canada, the goal of which is to prevent the approval of drugs that could be harmful to pregnant individuals and babies. The members from the thalidomide victims association were involved in the STEPS programme, which aimed to prevent teratogenicity.

United States

1962: FDA pharmacologist Frances Oldham Kelsey receives the President's Award for Distinguished Federal Civilian Service from President John F. Kennedy for blocking sale of thalidomide in the United States.

In the U.S., the FDA refused approval to market thalidomide, saying further studies were needed, which reduced the impact of thalidomide on American patients. The refusal was largely due to pharmacologist Frances Oldham Kelsey, who withstood pressure from the Richardson-Merrell Pharmaceuticals Co. Although thalidomide was not approved for sale in the United States at the time, over 2.5 million tablets had been distributed to over 1,000 physicians during a clinical testing programme. It is estimated that nearly 20,000 patients, several hundred of whom were pregnant, were given the drug to help alleviate morning sickness or as a sedative, and at least 17 children were consequently born in the United States with thalidomide-associated deformities. While pregnant, children's television host Sherri Finkbine took thalidomide that her husband had purchased over the counter in Europe. When she learned that thalidomide was causing fetal deformities, she wanted to abort her pregnancy, but the laws of Arizona allowed abortion only if the mother's life was in danger. Finkbine traveled to Sweden to have the abortion; thalidomide was found to have deformed the fetus.

For denying the application despite the pressure from Richardson-Merrell Pharmaceuticals Co., Kelsey eventually received the President's Award for Distinguished Federal Civilian Service at a 1962 ceremony with President John F. Kennedy. In September 2010, the FDA honored Kelsey with the first Kelsey award, given annually to an FDA staff member. This came 50 years after Kelsey, then a new medical officer at the agency, first reviewed the application from the William S. Merrell Pharmaceuticals Company of Cincinnati.

Cardiologist Helen B. Taussig learned of the damaging effects of thalidomide on newborns and, in 1967, testified before Congress on the matter after a trip to Germany, where she worked with infants with phocomelia (severe limb deformities). As a result of her efforts, thalidomide was banned in the United States and Europe.

Austria

Ingeborg Eichler, a member of the Austrian pharmaceutical admission conference, enforced restrictions on the sale of thalidomide (trade name Softenon) under the rules of prescription medication, and as a result relatively few affected children were born in Austria and Switzerland.

Japan

In Japan, there are 300 victims of this drug.

Aftermath of scandal

Thalidomide Memorial in Cardiff, Wales

The numerous reports of malformations in babies raised awareness of the drug's side effects on pregnant women. The birth defects caused by thalidomide range from moderate malformation to more severe forms, including phocomelia, dysmelia, amelia, bone hypoplasticity, and other congenital defects affecting the ear, heart, or internal organs. Franks et al. examined how the drug affected newborn babies and the severity of their deformities, and reviewed the drug in its early years; Webb in 1963 also reviewed the history of the drug and the different forms of birth defects it had caused. "The most common form of birth defects from thalidomide is shortened limbs, with the arms being more frequently affected. This syndrome is the presence of deformities of the long bones of the limbs resulting in shortening and other abnormalities."

Grünenthal criminal trial

In 1968, a large criminal trial began in West Germany, charging several Grünenthal officials with negligent homicide and injury. After Grünenthal settled with the victims in April 1970, the trial ended in December 1970 with no finding of guilt. As part of the settlement, Grünenthal paid 100 million DM into a special foundation; the West German government added 320 million DM. The foundation paid victims a one-time sum of 2,500–25,000 DM (depending on severity of disability) and a monthly stipend of 100–450 DM. The monthly stipends have since been raised substantially and are now paid entirely by the government (as the foundation had run out of money). Grünenthal paid another €50 million into the foundation in 2008.

On 31 August 2012, Grünenthal chief executive Harald F. Stock, who served as chief executive officer of Grünenthal GmbH from January 2009 to 28 May 2013, apologized for the first time for the company's having produced the drug and remained silent about the birth defects. At a ceremony, Stock unveiled a statue of a disabled child to symbolize those harmed by thalidomide and apologized for not trying to reach out to victims for over 50 years. At the time of the apology, between 5,000 and 6,000 people were still living with thalidomide-related birth defects. Victim advocates called the apology "insulting" and "too little, too late", and criticized the company for not compensating victims and for claiming that no one could have known the harm the drug caused, arguing that there were plenty of red flags at the time.

Australian National Memorial

On 13 November 2023, the Australian Government announced its intention to make a formal apology to people affected by thalidomide with the unveiling of a national memorial site. Prime Minister Anthony Albanese described the thalidomide tragedy as a "dark chapter" in Australian history, and Health Minister Mark Butler said, "While we cannot change the past or end the physical suffering, I hope these important next steps of recognition and apology will help heal some of the emotional wounds."

Notable cases

Niko von Glasow, German filmmaker
  • Mercédes Benegbi, born with phocomelia of both arms, drove the successful campaign for compensation from her government for Canadians who were affected by thalidomide.
  • Mat Fraser, born with phocomelia of both arms, is an English rock musician, actor, writer and performance artist. He produced a 2002 television documentary, Born Freak, which looked at this historical tradition and its relevance to modern disabled performers. This work has become the subject of academic analysis in the field of disability studies.
  • Niko von Glasow, a thalidomide survivor, produced a documentary called NoBody's Perfect, based on the lives of 12 people affected by the drug, which was released in 2008.
  • Josée Lake is a Canadian Paralympic gold medallist swimmer, thalidomide survivor, and president of the Thalidomide Victims Association of Canada.
  • Lorraine Mercer MBE of the United Kingdom, born with phocomelia of both arms and legs, is the only thalidomide survivor to carry the Olympic Torch.
  • Thomas Quasthoff, an internationally acclaimed bass-baritone, who describes himself: "1.34 meters tall, short arms, seven fingers — four right, three left — large, relatively well-formed head, brown eyes, distinctive lips; profession: singer".
  • Alvin Law, Canadian motivational speaker and former radio broadcaster.

Change in drug regulations

The disaster prompted many countries to introduce tougher rules for the testing and licensing of drugs, such as the Kefauver Harris Amendment (US), Directive 65/65/EEC (EU), and the Medicines Act 1968 (UK). In the United States, the new regulations strengthened the FDA, among other ways, by requiring applicants to prove efficacy and to disclose all side effects encountered in testing. The FDA subsequently initiated the Drug Efficacy Study Implementation to reclassify drugs already on the market.

Superintelligence

From Wikipedia, the free encyclopedia

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the most gifted human minds. Philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology to achieve radically greater intelligence. Several futures-studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. The hypothetical creation of the first superintelligence may or may not result from an intelligence explosion or a technological singularity.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities.

Several scientists and forecasters have argued for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement because of the potential social impact of such technologies.

Feasibility of artificial superintelligence

Artificial intelligence, especially foundation models, has made rapid progress, surpassing human capabilities in various benchmarks.

The creation of artificial superintelligence (ASI) has been a topic of increasing discussion in recent years, particularly with the rapid advancements in artificial intelligence (AI) technologies.

Progress in AI and claims of AGI

Recent developments in AI, particularly in large language models (LLMs) based on the transformer architecture, have led to significant improvements in various tasks. Models like GPT-3, GPT-4, GPT-5, Claude 3.5, and others have demonstrated capabilities that some researchers argue approach or even exhibit aspects of artificial general intelligence (AGI).

However, the claim that current LLMs constitute AGI is controversial. Critics argue that these models, while impressive, still lack true understanding and rely primarily on memorization.

Pathways to superintelligence

Philosopher David Chalmers argues that AGI is a likely path to ASI. He posits that AI can achieve equivalence to human intelligence, be extended to surpass it, and then be amplified to dominate humans across arbitrary tasks.

More recent research has explored various potential pathways to superintelligence:

  1. Scaling current AI systems – Some researchers argue that continued scaling of existing AI architectures, particularly transformer-based models, could lead to AGI and potentially ASI.
  2. Novel architectures – Others suggest that new AI architectures, potentially inspired by neuroscience, may be necessary to achieve AGI and ASI.
  3. Hybrid systems – Combining different AI approaches, including symbolic AI and neural networks, could potentially lead to more robust and capable systems.

Computational advantages

Artificial systems have several potential advantages over biological intelligence:

  1. Speed – Computer components operate much faster than biological neurons. Modern microprocessors (~2 GHz) are seven orders of magnitude faster than neurons (~200 Hz); see the arithmetic check after this list.
  2. Scalability – AI systems can potentially be scaled up in size and computational capacity more easily than biological brains.
  3. Modularity – Different components of AI systems can be improved or replaced independently.
  4. Memory – AI systems can have perfect recall and vast knowledge bases. They are also much less constrained than humans when it comes to working memory.
  5. Multitasking – AI can perform multiple tasks simultaneously in ways not possible for biological entities.
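As a quick check of the speed claim in the first item, using the stated clock and firing rates:

    \frac{2\ \text{GHz}}{200\ \text{Hz}} = \frac{2 \times 10^{9}\ \text{Hz}}{2 \times 10^{2}\ \text{Hz}} = 10^{7}

that is, seven orders of magnitude.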

Potential path through transformer models

Recent advancements in transformer-based models have led some researchers to speculate that the path to ASI might lie in scaling up and improving these architectures. This view suggests that continued improvements in transformer models or similar architectures could lead directly to ASI.

Some experts even argue that current large language models like GPT-5 may already exhibit early signs of AGI or ASI capabilities. This perspective suggests that the transition from current AI to ASI might be more continuous and rapid than previously thought, blurring the lines between narrow AI, AGI, and ASI.

However, this view remains controversial. Critics argue that current models, while impressive, still lack crucial aspects of general intelligence such as true understanding, reasoning, and adaptability across diverse domains.

The debate over whether the path to ASI will involve a distinct AGI phase or a more direct scaling of current technologies is ongoing, with significant implications for AI development strategies and safety considerations.

Challenges and uncertainties

Despite these potential advantages, there are significant challenges and uncertainties in achieving ASI:

  1. Ethical and safety concerns – The development of ASI raises numerous ethical questions and potential risks that need to be addressed.
  2. Computational requirements – The computational resources required for ASI might be far beyond current capabilities.
  3. Fundamental limitations – There may be fundamental limitations to intelligence that apply to both artificial and biological systems.
  4. Unpredictability – The path to ASI and its consequences are highly uncertain and difficult to predict.

As research in AI continues to advance rapidly, the question of the feasibility of ASI remains a topic of intense debate and study in the scientific community.

Feasibility of biological superintelligence

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence. By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence and that this process instead is likely to continue. There is no scientific consensus concerning either possibility and in both cases, the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1,000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
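A Monte Carlo sketch of the order-statistics arithmetic behind the quoted figures. To match Bostrom's numbers it assumes that genetic IQ variation among embryos of the same parents is roughly normal with a standard deviation of about 7.5 points; both the assumption and the code are illustrative only.

    # Monte Carlo sketch of embryo-selection gains (illustrative only).
    # Assumes genetic IQ variation among sibling embryos is normal
    # with a standard deviation of ~7.5 IQ points.
    import numpy as np

    rng = np.random.default_rng(0)

    def mean_gain(n_embryos, sd=7.5, trials=10_000):
        """Average IQ gain from selecting the best of n embryos."""
        draws = rng.normal(0.0, sd, size=(trials, n_embryos))
        return draws.max(axis=1).mean()

    print(f"best of 2:    ~{mean_gain(2):.1f} IQ points")     # ~4.2
    print(f"best of 1000: ~{mean_gain(1000):.1f} IQ points")  # ~24.3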

Alternatively, collective intelligence might be constructed by better organizing humans at present levels of individual intelligence. Several writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. A prediction market is sometimes considered as an example of a working collective intelligence system, consisting of humans only (assuming algorithms are not used to inform decisions).

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain−computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Forecasts

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, standard deviation 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

In a 2022 survey, the median year by which respondents expected "High-level machine intelligence" with 50% confidence is 2061. The survey defined the achievement of high-level machine intelligence as when unaided machines can accomplish every task better and more cheaply than human workers.

In 2023, OpenAI leaders Sam Altman, Greg Brockman and Ilya Sutskever published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.

In 2024, Ilya Sutskever left OpenAI to cofound the startup Safe Superintelligence, which focuses solely on creating a superintelligence that is safe by design, while avoiding "distraction by management overhead or product cycles". Despite still offering no product, the startup became valued at $30 billion in February 2025.

In 2025, the forecast scenario "AI 2027" led by Daniel Kokotajlo predicted rapid progress in the automation of coding and AI research, followed by ASI. In September 2025, a review of surveys of scientists and industry experts from the last 15 years reported that most agreed that artificial general intelligence (AGI), a level well below technological singularity, will occur before the year 2100. A more recent analysis by AIMultiple reported that "current surveys of AI researchers are predicting AGI around 2040".

Design considerations

The design of superintelligent AI systems raises critical questions about what values and goals these systems should have. Several proposals have been put forward:

Value alignment proposals

  • Coherent extrapolated volition (CEV) – The AI should have the values upon which humans would converge if they were more knowledgeable and rational.
  • Moral rightness (MR) – The AI should be programmed to do what is morally right, relying on its superior cognitive abilities to determine ethical actions.
  • Moral permissibility (MP) – The AI should stay within the bounds of moral permissibility while otherwise pursuing goals aligned with human values (similar to CEV).

Bostrom elaborates on these concepts:

instead of implementing humanity's coherent extrapolated volition, one could try to build an AI to do what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal "moral rightness" (MR) ...

MR would also appear to have some disadvantages. It relies on the notion of "morally right", a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of "moral rightness" could result in outcomes that would be morally very wrong ...

One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity's CEV so long as it did not act in morally impermissible ways.

Recent developments

Since Bostrom's analysis, new approaches to AI value alignment have emerged:

  • Inverse Reinforcement Learning (IRL) – This technique aims to infer human preferences from observed behavior, potentially offering a more robust approach to value alignment (a minimal sketch follows this list).
  • Constitutional AI – Proposed by Anthropic, this involves training AI systems with explicit ethical principles and constraints.
  • Debate and amplification – These techniques, explored by OpenAI, use AI-assisted debate and iterative processes to better understand and align with human values.
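As a concrete, hypothetical illustration of the first idea, the sketch below infers a reward weight from observed choices under an assumed Boltzmann-rational choice model, the simplest form of IRL-style preference inference. Every feature value, choice, and parameter here is invented for illustration:

    # Minimal, hypothetical sketch of IRL-style preference inference.
    import numpy as np

    # Three candidate actions described by two features,
    # e.g. (task progress, human comfort). Invented values.
    features = np.array([[1.0, 0.0],   # fast but uncomfortable
                         [0.6, 0.8],   # balanced
                         [0.1, 1.0]])  # slow but comfortable

    observed = [1, 1, 2, 1]  # indices of actions a demonstrator chose

    def log_likelihood(w, beta=5.0):
        """Log-probability of the observed choices, assuming the
        demonstrator picks action a with P(a) ~ exp(beta * w.f(a))."""
        u = beta * features @ w
        log_p = u - np.log(np.exp(u).sum())
        return sum(log_p[c] for c in observed)

    # Grid-search the weight on 'human comfort' (weights sum to 1).
    grid = np.linspace(0.0, 1.0, 101)
    w2 = max(grid, key=lambda g: log_likelihood(np.array([1 - g, g])))
    print(f"inferred weight on comfort: {w2:.2f}")

The inferred weight is the value-alignment output: a compact estimate of what the demonstrator cares about, learned from behavior rather than specified by hand.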

Transformer LLMs and ASI

The rapid advancement of transformer-based LLMs has led to speculation about their potential path to ASI. Some researchers argue that scaled-up versions of these models could exhibit ASI-like capabilities:

  • Emergent abilities – As LLMs increase in size and complexity, they demonstrate unexpected capabilities not present in smaller models.
  • In-context learning – LLMs show the ability to adapt to new tasks without fine-tuning, potentially mimicking general intelligence.
  • Multi-modal integration – Recent models can process and generate various types of data, including text, images, and audio.

However, critics argue that current LLMs lack true understanding and are merely sophisticated pattern matchers, raising questions about their suitability as a path to ASI.

Other perspectives on artificial superintelligence

Additional viewpoints on the development and implications of superintelligence include:

  • Recursive self-improvement – I. J. Good proposed the concept of an "intelligence explosion", where an AI system could rapidly improve its own intelligence, potentially leading to superintelligence.
  • Orthogonality thesis – Bostrom argues that an AI's level of intelligence is orthogonal to its final goals, meaning a superintelligent AI could have any set of motivations.
  • Instrumental convergence – Certain instrumental goals (e.g., self-preservation, resource acquisition) might be pursued by a wide range of AI systems, regardless of their final goals.

Challenges and ongoing research

The pursuit of value-aligned AI faces several challenges:

  • Philosophical uncertainty in defining concepts like "moral rightness"
  • Technical complexity in translating ethical principles into precise algorithms
  • Potential for unintended consequences even with well-intentioned approaches

Current research directions include multi-stakeholder approaches to incorporate diverse perspectives, developing methods for scalable oversight of AI systems, and improving techniques for robust value learning.

As AI research progresses rapidly, addressing these design challenges remains crucial for creating ASI systems that are both powerful and aligned with human interests.

Potential threat to humanity

The development of artificial superintelligence (ASI) has raised concerns about potential existential risks to humanity. Researchers have proposed various scenarios in which an ASI could pose a significant threat:

Intelligence explosion and control problem

Some researchers argue that through recursive self-improvement, an ASI could rapidly become so powerful as to be beyond human control. This concept, known as an "intelligence explosion", was first proposed by I. J. Good in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

This scenario presents the AI control problem: how to create an ASI that will benefit humanity while avoiding unintended harmful consequences. Eliezer Yudkowsky argues that solving this problem is crucial before ASI is developed, as a superintelligent system might be able to thwart any subsequent attempts at control.

Unintended consequences and goal misalignment

Even with benign intentions, an ASI could potentially cause harm due to misaligned goals or unexpected interpretations of its objectives. Nick Bostrom provides a stark example of this risk:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

Stuart Russell offers another illustrative scenario:

A system given the objective of maximizing human happiness might find it easier to rewire human neurology so that humans are always happy regardless of their circumstances, rather than to improve the external world.

These examples highlight the potential for catastrophic outcomes even when an ASI is not explicitly designed to be harmful, underscoring the critical importance of precise goal specification and alignment.

Potential mitigation strategies

Researchers have proposed various approaches to mitigate risks associated with ASI:

  • Capability control – Limiting an ASI's ability to influence the world, such as through physical isolation or restricted access to resources.
  • Motivational control – Designing ASIs with goals that are fundamentally aligned with human values.
  • Ethical AI – Incorporating ethical principles and decision-making frameworks into ASI systems.
  • Oversight and governance – Developing robust international frameworks for the development and deployment of ASI technologies.

Despite these proposed strategies, some experts, such as Roman Yampolskiy, argue that the challenge of controlling a superintelligent AI might be fundamentally unsolvable, emphasizing the need for extreme caution in ASI development.

Debate and skepticism

Not all researchers agree on the likelihood or severity of ASI-related existential risks. Some, like Rodney Brooks, argue that fears of superintelligent AI are overblown and based on unrealistic assumptions about the nature of intelligence and technological progress. Others, such as Joanna Bryson, contend that anthropomorphizing AI systems leads to misplaced concerns about their potential threats.

Recent developments and current perspectives

The rapid advancement of LLMs and other AI technologies has intensified debates about the proximity and potential risks of ASI. While there is no scientific consensus, some researchers and AI practitioners argue that current AI systems may already be approaching AGI or even ASI capabilities.

  • LLM capabilities – Recent LLMs like GPT-4 have demonstrated unexpected abilities in areas such as reasoning, problem-solving, and multi-modal understanding, leading some to speculate about their potential path to ASI.
  • Emergent behaviors – Studies have shown that as AI models increase in size and complexity, they can exhibit emergent capabilities not present in smaller models, potentially indicating a trend towards more general intelligence.
  • Rapid progress – The pace of AI advancement has led some to argue that we may be closer to ASI than previously thought, with potential implications for existential risk.

As of 2024, AI skeptics such as Gary Marcus caution against premature claims of AGI or ASI, arguing that current AI systems, despite their impressive capabilities, still lack true understanding and general intelligence. They emphasize the significant challenges that remain in achieving human-level intelligence, let alone superintelligence.

The debate surrounding the current state and trajectory of AI development underscores the importance of continued research into AI safety and ethics, as well as the need for robust governance frameworks to manage potential risks as AI capabilities continue to advance.

Intelligence

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Intelligence

Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. It can be described as the ability to perceive or infer information and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

The term rose to prominence during the early 1900s. Most psychologists believe that intelligence can be divided into various domains or competencies.

Intelligence has long been studied in humans, across numerous disciplines. It has also been observed in the cognition of non-human animals. Some researchers have suggested that plants exhibit forms of intelligence, though this remains controversial.

Etymology

The word intelligence derives from the Latin nouns intelligentia or intellēctus, which in turn stem from the verb intelligere, to comprehend or perceive. In the Middle Ages, the word intellectus became the scholarly technical term for understanding and a translation for the Greek philosophical term nous. This term, however, was strongly linked to the metaphysical and cosmological theories of teleological scholasticism, including theories of the immortality of the soul and the concept of the active intellect (also known as the active intelligence). This approach to the study of nature was strongly rejected by early modern philosophers such as Francis Bacon, Thomas Hobbes, John Locke, and David Hume, all of whom preferred "understanding" (in place of "intellectus" or "intelligence") in their English philosophical works. Hobbes, for example, in his Latin De Corpore, used "intellectus intelligit", translated in the English version as "the understanding understandeth", as a typical example of a logical absurdity. "Intelligence" has therefore become less common in English-language philosophy, but it was later taken up (with the scholastic theories it now implies) in more contemporary psychology.

Definitions

There is controversy over how to define intelligence. Scholars describe its constituent abilities in various ways, and differ in the degree to which they conceive of intelligence as quantifiable.

A consensus report called Intelligence: Knowns and Unknowns, published in 1995 by the Board of Scientific Affairs of the American Psychological Association, states:

Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person's intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.

Psychologists and learning researchers have also suggested definitions of intelligence, such as the following:

  • Alfred Binet – Judgment, otherwise called "good sense", "practical sense", "initiative", the faculty of adapting one's self to circumstances ... auto-critique.
  • David Wechsler – The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment.
  • Lloyd Humphreys – "...the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills".
  • Howard Gardner – To my mind, a human intellectual competence must entail a set of skills of problem solving—enabling the individual to resolve genuine problems or difficulties that he or she encounters and, when appropriate, to create an effective product—and must also entail the potential for finding or creating problems—and thereby laying the groundwork for the acquisition of new knowledge.
  • Robert Sternberg & William Salter – Goal-directed adaptive behavior.
  • Reuven Feuerstein – The theory of Structural Cognitive Modifiability describes intelligence as "the unique propensity of human beings to change or modify the structure of their cognitive functioning to adapt to the changing demands of a life situation".
  • Shane Legg & Marcus Hutter – A synthesis of 70+ definitions from psychology, philosophy, and AI researchers: "Intelligence measures an agent's ability to achieve goals in a wide range of environments", which has been mathematically formalized.
  • Alexander Wissner-Gross – F = T ∇S_τ: "Intelligence is a force, F, that acts so as to maximize future freedom of action. It acts to maximize future freedom of action, or keep options open, with some strength T, with the diversity of possible accessible futures, S, up to some future time horizon, τ. In short, intelligence doesn't like to get trapped".

Human

Human intelligence is the intellectual power of humans, which is marked by complex cognitive feats and high levels of motivation and self-awareness. Intelligence enables humans to remember descriptions of things and use those descriptions in future behaviors. It gives humans the cognitive abilities to learn, form concepts, understand, and reason, including the capacities to recognize patterns, innovate, plan, solve problems, and employ language to communicate. These cognitive abilities can be organized into frameworks such as fluid vs. crystallized intelligence and the unified Cattell–Horn–Carroll model, which contains abilities like fluid reasoning, perceptual speed, verbal abilities, and others.

Intelligence is different from learning. Learning refers to the act of retaining facts and information or abilities and being able to recall them for future use. Intelligence, on the other hand, is the cognitive ability of someone to perform these and other processes.

Intelligence quotient (IQ)

There have been various attempts to quantify intelligence via psychometric testing. Prominent among these are the various Intelligence Quotient (IQ) tests, which were first developed in the early 20th century to screen children for intellectual disability. Over time, IQ tests became more pervasive, being used to screen immigrants, military recruits, and job applicants. As the tests became more popular, belief that IQ tests measure a fundamental and unchanging attribute that all humans possess became widespread.

An influential theory that promoted the idea that IQ measures a fundamental quality possessed by every person is the theory of General Intelligence, or g factor. The g factor is a construct that summarizes the correlations observed between an individual's scores on a range of cognitive tests.
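As a minimal illustration of how such a construct is extracted in practice, the sketch below generates synthetic test scores driven by a single latent ability and recovers a general factor as the leading eigenvector of their correlation matrix. The loadings, noise level, and sample size are assumptions chosen for illustration, not real psychometric values.

    # Minimal sketch of extracting a general factor from test scores
    # (synthetic data; loadings and noise level are assumed, not real).
    import numpy as np

    rng = np.random.default_rng(0)

    # One latent ability "g" plus test-specific noise drives five tests,
    # producing the positive inter-test correlations that g summarizes.
    n = 500
    g = rng.normal(size=n)
    loadings = np.array([0.8, 0.7, 0.6, 0.75, 0.65])
    scores = np.outer(g, loadings) + rng.normal(scale=0.5, size=(n, 5))

    corr = np.corrcoef(scores, rowvar=False)    # 5x5 correlation matrix
    eigvals, _ = np.linalg.eigh(corr)           # eigenvalues, ascending

    # The leading eigenvalue's share of total variance is a common proxy
    # for how much a single general factor explains.
    print(f"first factor explains {eigvals[-1] / eigvals.sum():.0%} of variance")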

Today, most psychologists agree that IQ measures at least some aspects of human intelligence, particularly the ability to thrive in an academic context. However, many psychologists question the validity of IQ tests as a measure of intelligence as a whole.

There is debate about the heritability of IQ, that is, what proportion of differences in IQ test performance between individuals are explained by genetic or environmental factors. The scientific consensus is that genetics does not explain average differences in IQ test performance between racial groups.

Emotional

Emotional intelligence is thought to be the ability to convey emotions to others in an understandable way, as well as to read the emotions of others accurately. Some theories imply that heightened emotional intelligence could also lead to faster generation and processing of emotions, in addition to greater accuracy. Higher emotional intelligence is also thought to help us manage emotions, which benefits our problem-solving skills. Emotional intelligence is important to our mental health and has ties to social intelligence.

Social

Social intelligence is the ability to understand the social cues and motivations of others, and of oneself, in social situations. It is thought to be distinct from other types of intelligence, but related to emotional intelligence. Research on social intelligence has coincided with studies of how we make judgments of others, the accuracy with which we do so, and why people are viewed as having positive or negative social character. There is debate as to whether these lines of research and social intelligence stem from the same theories or are distinct; they are generally regarded as two different schools of thought.

Moral

Moral intelligence is the capacity to understand right from wrong and to behave according to the values one believes to be right. It is considered a distinct form of intelligence, independent of both emotional and cognitive intelligence.

Book smart and street smart

Concepts of "book smarts" and "street smart" are contrasting views based on the premise that some people have knowledge gained through academic study, but may lack the experience to sensibly apply that knowledge, while others have knowledge gained through practical experience, but may lack accurate information usually gained through study by which to effectively apply that knowledge. Artificial intelligence researcher Hector Levesque has noted that:

Given the importance of learning through text in our own personal lives and in our culture, it is perhaps surprising how utterly dismissive we tend to be of it. It is sometimes derided as being merely "book knowledge", and having it is being "book smart". In contrast, knowledge acquired through direct experience and apprenticeship is called "street knowledge", and having it is being "street smart".

Nonhuman animal

A crab-eating macaque using a stone

Although humans have been the primary focus of intelligence researchers, scientists have also attempted to investigate animal intelligence, or more broadly, animal cognition. These researchers are interested in studying both mental ability in a particular species, and comparing abilities between species. They study various measures of problem solving, as well as numerical and verbal reasoning abilities. Some challenges include defining intelligence so it has the same meaning across species, and operationalizing a measure that accurately compares mental ability across species and contexts.

Wolfgang Köhler's research on the intelligence of apes is an example of research in this area, as is Stanley Coren's book, The Intelligence of Dogs. Non-human animals particularly noted and studied for their intelligence include chimpanzees, bonobos (notably the language-using Kanzi) and other great apes, dolphins, elephants and to some extent parrots, rats and ravens.

Cephalopod intelligence provides an important comparative study. Cephalopods appear to exhibit characteristics of significant intelligence, yet their nervous systems differ radically from those of backboned animals. Vertebrates such as mammals, birds, reptiles and fish have shown a fairly high degree of intellect that varies from species to species. The same is true of arthropods.

g factor in non-humans

Evidence of a general factor of intelligence has been observed in non-human animals. First described in humans, the g factor has since been identified in a number of non-human species.

Cognitive ability and intelligence cannot be measured using the same, largely verbally dependent, scales developed for humans. Instead, intelligence is measured using a variety of interactive and observational tools focusing on innovation, habit reversal, social learning, and responses to novelty. Studies have shown that g is responsible for 47% of the individual variance in cognitive ability measures in primates and between 55% and 60% of the variance in mice (Locurto, Locurto). These values are similar to the accepted variance in IQ explained by g in humans (40–50%).

Plant

It has been argued that plants should also be classified as intelligent based on their ability to sense and model external and internal environments and adjust their morphology, physiology and phenotype accordingly to ensure self-preservation and reproduction.

A counter-argument is that intelligence is commonly understood to involve the creation and use of persistent memories, as opposed to computation that does not involve learning. If this is accepted as definitive of intelligence, then it includes the artificial intelligence of robots capable of "machine learning", but excludes the purely autonomic sense-reaction responses that can be observed in many plants. Plants are not limited to automated sensory-motor responses, however: they are capable of discriminating positive and negative experiences and of "learning" (registering memories) from their past experiences. They are likewise capable of communication, accurately computing their circumstances, using sophisticated cost–benefit analysis and taking tightly controlled actions to mitigate and control diverse environmental stressors.

Artificial

Scholars studying artificial intelligence have proposed definitions of intelligence that include the intelligence demonstrated by machines. Some of these definitions are meant to be general enough to encompass human and other animal intelligence as well. An intelligent agent can be defined as a system that perceives its environment and takes actions which maximize its chances of success. Kaplan and Haenlein define artificial intelligence as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation". Progress in artificial intelligence can be demonstrated in benchmarks ranging from games to practical tasks such as protein folding. Existing AI lags humans in terms of general intelligence, which is sometimes defined as the "capacity to learn how to carry out a huge range of tasks".
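That perceive-and-act definition can be sketched in a few lines of code. Everything below (the Agent class, the utility function, the thermostat-style example) is hypothetical scaffolding for illustration, not a reference implementation from any AI library:

    # Sketch of the perceive/act definition above. The Agent class, the
    # utility function, and the thermostat example are hypothetical.
    from typing import Callable, Iterable

    class Agent:
        def __init__(self, actions: Iterable[str],
                     utility: Callable[[float, str], float]):
            self.actions = list(actions)
            self.utility = utility   # estimated success of an action given a percept

        def act(self, percept: float) -> str:
            # "Takes actions which maximize its chances of success":
            # choose the action with the highest estimated utility.
            return max(self.actions, key=lambda a: self.utility(percept, a))

    # Usage: an agent whose "success" is keeping a room near 20 degrees C.
    effect = {"heat": 1.0, "cool": -1.0, "idle": 0.0}
    agent = Agent(
        actions=effect,
        utility=lambda temp, a: -abs(temp + effect[a] - 20.0),
    )
    print(agent.act(17.0))   # -> "heat"

The design choice worth noting is that all of the "intelligence" lives in the utility function; the loop itself is trivial, which is why definitions in this family measure an agent by what its choices achieve rather than by how they are computed.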

Mathematician Olle Häggström defines intelligence in terms of "optimization power", an agent's capacity for efficient cross-domain optimization of the world according to the agent's preferences, or more simply the ability to "steer the future into regions of possibility ranked high in a preference ordering". In this optimization framework, Deep Blue has the power to "steer a chessboard's future into a subspace of possibility which it labels as 'winning', despite attempts by Garry Kasparov to steer the future elsewhere." Hutter and Legg, after surveying the literature, define intelligence as "an agent's ability to achieve goals in a wide range of environments". While cognitive ability is sometimes measured as a one-dimensional parameter, it could also be represented as a "hypersurface in a multidimensional space" to compare systems that are good at different intellectual tasks. Some skeptics believe that there is no meaningful way to define intelligence, aside from "just pointing to ourselves".

Logical reasoning

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Logical_reasoning

Logical reasoni...