
Animal testing

From Wikipedia, the free encyclopedia

Wistar rat
Description: Around 50–100 million vertebrate animals are used in experiments annually.
Subjects: Animal testing, science, medicine, animal welfare, animal rights, ethics

Animal testing, also known as animal experimentation, animal research and in vivo testing, is the use of non-human animals in experiments that seek to control the variables that affect the behavior or biological system under study. This approach can be contrasted with field studies in which animals are observed in their natural environments or habitats. Experimental research with animals is usually conducted in universities, medical schools, pharmaceutical companies, defense establishments and commercial facilities that provide animal-testing services to industry. The focus of animal testing varies on a continuum from pure research, focusing on developing fundamental knowledge of an organism, to applied research, which may focus on answering some question of great practical importance, such as finding a cure for a disease. Examples of applied research include testing disease treatments, breeding, defense research and toxicology, including cosmetics testing. In education, animal testing is sometimes a component of biology or psychology courses. The practice is regulated to varying degrees in different countries.

It is estimated that the annual use of vertebrate animals—from zebrafish to non-human primates—ranges from tens to more than 100 million. In the European Union, vertebrate species represent 93% of animals used in research, and 11.5 million animals were used there in 2011. By one estimate the number of mice and rats used in the United States alone in 2001 was 80 million. Mice, rats, fish, amphibians and reptiles together account for over 85% of research animals.

Most animals are euthanized after being used in an experiment. Sources of laboratory animals vary between countries and species; most animals are purpose-bred, while a minority are caught in the wild or supplied by dealers who obtain them from auctions and pounds. Supporters of the use of animals in experiments, such as the British Royal Society, argue that virtually every medical achievement in the 20th century relied on the use of animals in some way. The Institute for Laboratory Animal Research of the United States National Academy of Sciences has argued that animal research cannot be replaced by even sophisticated computer models, which are unable to deal with the extremely complex interactions between molecules, cells, tissues, organs, organisms and the environment. Animal rights organizations—such as PETA and BUAV—question the need for and legitimacy of animal testing, arguing that it is cruel and poorly regulated, that medical progress is actually held back by misleading animal models that cannot reliably predict effects in humans, that some of the tests are outdated, that the costs outweigh the benefits, or that animals have the intrinsic right not to be used or harmed in experimentation.

Definitions

The terms animal testing, animal experimentation, animal research, in vivo testing, and vivisection have similar denotations but different connotations. Literally, "vivisection" means "live sectioning" of an animal, and historically referred only to experiments that involved the dissection of live animals. The term is occasionally used to refer pejoratively to any experiment using living animals; for example, the Encyclopædia Britannica defines "vivisection" as: "Operation on a living animal for experimental rather than healing purposes; more broadly, all experimentation on live animals", although dictionaries point out that the broader definition is "used only by people who are opposed to such work". The word has a negative connotation, implying torture, suffering, and death. The word "vivisection" is preferred by those opposed to this research, whereas scientists typically use the term "animal experimentation".

History

The earliest references to animal testing are found in the writings of the Greeks in the 2nd and 4th centuries BC. Aristotle and Erasistratus were among the first to perform experiments on living animals. Galen, a 2nd-century Roman physician, dissected pigs and goats; he is known as the "father of vivisection". Avenzoar, a 12th-century Arabic physician in Moorish Spain, also practiced dissection; he introduced animal testing as an experimental method of testing surgical procedures before applying them to human patients.

Animals have been used repeatedly throughout the history of biomedical research. In 1831, the founders of the Dublin Zoo were members of the medical profession who were interested in studying animals both while they were alive and when they were dead. In the 1880s, Louis Pasteur convincingly demonstrated the germ theory of disease by inducing anthrax in sheep. In the 1880s, Robert Koch infected mice and guinea pigs with anthrax and tuberculosis. In the 1890s, Ivan Pavlov famously used dogs to describe classical conditioning. In World War I, German agents infected sheep bound for Russia with anthrax, and inoculated mules and horses of the French cavalry with the equine disease glanders. Between 1917 and 1918, the Germans infected mules in Argentina bound for American forces, resulting in the death of 200 mules. Insulin was first isolated from dogs in 1922, and revolutionized the treatment of diabetes. On 3 November 1957, a Soviet dog, Laika, became the first of many animals to orbit the Earth. In the 1970s, antibiotic treatments and vaccines for leprosy were developed using armadillos, then given to humans. The ability of humans to change the genetics of animals took a large step forward in 1974 when Rudolf Jaenisch produced the first transgenic mammal by integrating DNA from the SV40 virus into the genome of mice. This genetic research progressed rapidly and, in 1996, Dolly the sheep was born, the first mammal to be cloned from an adult cell.

Toxicology testing became important in the 20th century. In the 19th century, laws regulating drugs were lax; in the US, for example, the government could only ban a drug after a company had been prosecuted for selling products that harmed customers. However, in response to the Elixir Sulfanilamide disaster of 1937, in which the eponymous drug killed more than 100 users, the US Congress passed laws that required safety testing of drugs on animals before they could be marketed. Other countries enacted similar legislation. In the 1960s, in reaction to the Thalidomide tragedy, further laws were passed requiring safety testing on pregnant animals before a drug could be sold.

Historical debate

Claude Bernard, regarded as the "prince of vivisectors", argued that experiments on animals are "entirely conclusive for the toxicology and hygiene of man".

As the experimentation on animals increased, especially the practice of vivisection, so did criticism and controversy. In 1655, the advocate of Galenic physiology Edmund O'Meara said that "the miserable torture of vivisection places the body in an unnatural state". O'Meara and others argued that animal physiology could be affected by pain during vivisection, rendering results unreliable. There were also objections on an ethical basis, contending that the benefit to humans did not justify the harm to animals. Early objections to animal testing also came from another angle—many people believed that animals were inferior to humans and so different that results from animals could not be applied to humans.

On the other side of the debate, those in favor of animal testing held that experiments on animals were necessary to advance medical and biological knowledge. Claude Bernard—who is sometimes known as the "prince of vivisectors" and the father of physiology, and whose wife, Marie Françoise Martin, founded the first anti-vivisection society in France in 1883—famously wrote in 1865 that "the science of life is a superb and dazzlingly lighted hall which may be reached only by passing through a long and ghastly kitchen". Arguing that "experiments on animals ... are entirely conclusive for the toxicology and hygiene of man...the effects of these substances are the same on man as on animals, save for differences in degree", Bernard established animal experimentation as part of the standard scientific method.

In 1896, the physiologist and physician Dr. Walter B. Cannon said "The antivivisectionists are the second of the two types Theodore Roosevelt described when he said, 'Common sense without conscience may lead to crime, but conscience without common sense may lead to folly, which is the handmaiden of crime.'" These divisions between pro- and anti-animal testing groups first came to public attention during the Brown Dog affair in the early 1900s, when hundreds of medical students clashed with anti-vivisectionists and police over a memorial to a vivisected dog.

One of Pavlov's dogs with a saliva-catch container and tube surgically implanted in his muzzle, Pavlov Museum, 2005

In 1822, the first animal protection law was enacted in the British parliament, followed by the Cruelty to Animals Act (1876), the first law specifically aimed at regulating animal testing. The legislation was promoted by Charles Darwin, who wrote to Ray Lankester in March 1871: "You ask about my opinion on vivisection. I quite agree that it is justifiable for real investigations on physiology; but not for mere damnable and detestable curiosity. It is a subject which makes me sick with horror, so I will not say another word about it, else I shall not sleep to-night." In response to the lobbying by anti-vivisectionists, several organizations were set up in Britain to defend animal research: The Physiological Society was formed in 1876 to give physiologists "mutual benefit and protection", the Association for the Advancement of Medicine by Research was formed in 1882 and focused on policy-making, and the Research Defence Society (now Understanding Animal Research) was formed in 1908 "to make known the facts as to experiments on animals in this country; the immense importance to the welfare of mankind of such experiments and the great saving of human life and health directly attributable to them".

Opposition to the use of animals in medical research first arose in the United States during the 1860s, when Henry Bergh founded the American Society for the Prevention of Cruelty to Animals (ASPCA); America's first specifically anti-vivisection organization, the American Anti-Vivisection Society (AAVS), was founded in 1883. Antivivisectionists of the era generally believed that the spread of mercy was the great cause of civilization and that vivisection was cruel. However, in the USA the antivivisectionists' efforts were defeated in every legislature, overwhelmed by the superior organization and influence of the medical community. Overall, this movement had little legislative success until the passing of the Laboratory Animal Welfare Act in 1966.

Care and use of animals

Regulations and laws

Worldwide laws regarding testing cosmetics on animals (map legend):
  • Nationwide ban on all cosmetic testing on animals
  • Partial ban on cosmetic testing on animals¹
  • Ban on the sale of cosmetics tested on animals
  • No ban on any cosmetic testing on animals
  • Unknown
¹Some methods of testing are excluded from the ban, or the laws vary within the country.

The regulations that apply to animals in laboratories vary across species. In the U.S., under the provisions of the Animal Welfare Act and the Guide for the Care and Use of Laboratory Animals (the Guide), published by the National Academy of Sciences, any procedure can be performed on an animal if it can be successfully argued that it is scientifically justified. In general, researchers are required to consult with the institution's veterinarian and its Institutional Animal Care and Use Committee (IACUC), which every research facility is obliged to maintain. The IACUC must ensure that alternatives, including non-animal alternatives, have been considered, that the experiments are not unnecessarily duplicative, and that pain relief is given unless it would interfere with the study. The IACUCs regulate all vertebrates in testing at institutions receiving federal funds in the USA. Although the provisions of the Animal Welfare Act do not include purpose-bred rodents and birds, these species are equally regulated under Public Health Service policies that govern the IACUCs. The Public Health Service policy oversees the Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC). The CDC conducts infectious disease research on nonhuman primates, rabbits, mice, and other animals, while FDA requirements cover use of animals in pharmaceutical research. Animal Welfare Act (AWA) regulations are enforced by the USDA, whereas Public Health Service regulations are enforced by OLAW and in many cases by AAALAC.

According to the 2014 U.S. Department of Agriculture Office of the Inspector General (OIG) report—which looked at the oversight of animal use during a three-year period—"some Institutional Animal Care and Use Committees ...did not adequately approve, monitor, or report on experimental procedures on animals". The OIG found that "as a result, animals are not always receiving basic humane care and treatment and, in some cases, pain and distress are not minimized during and after experimental procedures". According to the report, within a three-year period, nearly half of all American laboratories with regulated species were cited for AWA violations relating to improper IACUC oversight. The USDA OIG made similar findings in a 2005 report. With only 120 inspectors, the United States Department of Agriculture (USDA) oversees more than 12,000 facilities involved in research, exhibition, breeding, or dealing of animals. Others have criticized the composition of IACUCs, asserting that the committees are predominantly made up of animal researchers and university representatives who may be biased against animal welfare concerns.

Larry Carbone, a laboratory animal veterinarian, writes that, in his experience, IACUCs take their work very seriously regardless of the species involved, though the use of non-human primates always raises what he calls a "red flag of special concern". A study published in Science magazine in July 2001 confirmed the low reliability of IACUC reviews of animal experiments. Funded by the National Science Foundation, the three-year study found that animal-use committees unfamiliar with the university and personnel involved do not reach the same approval decisions as committees that know them; specifically, blinded committees more often asked for additional information rather than approving studies.

Scientists in India are protesting a recent guideline issued by the University Grants Commission to ban the use of live animals in universities and laboratories.

Numbers

Accurate global figures for animal testing are difficult to obtain; it has been estimated that 100 million vertebrates are experimented on around the world every year, 10–11 million of them in the EU. The Nuffield Council on Bioethics reports that global annual estimates range from 50 to 100 million animals. None of the figures include invertebrates such as shrimp and fruit flies.

The USDA/APHIS has published the 2016 animal research statistics. Overall, the number of animals (covered by the Animal Welfare Act) used in research in the US rose 6.9%, from 767,622 (2015) to 820,812 (2016). This includes both public and private institutions. By comparing with EU data, where all vertebrate species are counted, Speaking of Research estimated that around 12 million vertebrates were used in research in the US in 2016. A 2015 article published in the Journal of Medical Ethics argued that the use of animals in the US has dramatically increased in recent years. Researchers found this increase is largely the result of an increased reliance on genetically modified mice in animal studies.

In 1995, researchers at Tufts University Center for Animals and Public Policy estimated that 14–21 million animals were used in American laboratories in 1992, a reduction from a high of 50 million used in 1970. In 1986, the U.S. Congress Office of Technology Assessment reported that estimates of the animals used in the U.S. range from 10 million to upwards of 100 million each year, and that their own best estimate was at least 17 million to 22 million. In 2016, the Department of Agriculture listed 60,979 dogs, 18,898 cats, 71,188 non-human primates, 183,237 guinea pigs, 102,633 hamsters, 139,391 rabbits, 83,059 farm animals, and 161,467 other mammals, a total of 820,812, a figure that includes all mammals except purpose-bred mice and rats. The use of dogs and cats in research in the U.S. decreased from 1973 to 2016 from 195,157 to 60,979, and from 66,165 to 18,898, respectively.

In the UK, Home Office figures show that 3.79 million procedures were carried out in 2017. 2,960 of those procedures used non-human primates, down over 50% since 1988. A "procedure" refers here to an experiment that might last minutes, several months, or years. Most animals are used in only one procedure: animals are frequently euthanized after the experiment; however, death is the endpoint of some procedures. The procedures conducted on animals in the UK in 2017 were categorised as follows:

  • 43% (1.61 million) were assessed as sub-threshold
  • 4% (0.14 million) were assessed as non-recovery
  • 36% (1.35 million) were assessed as mild
  • 15% (0.55 million) were assessed as moderate
  • 4% (0.14 million) were assessed as severe

A 'severe' procedure would be, for instance, any test where death is the end-point or fatalities are expected, whereas a 'mild' procedure would be something like a blood test or an MRI scan.

The Three Rs

The Three Rs (3Rs) are guiding principles for more ethical use of animals in testing. These were first described by W.M.S. Russell and R.L. Burch in 1959. The 3Rs state:

  1. Replacement, which refers to the preferred use of non-animal methods over animal methods whenever it is possible to achieve the same scientific aims. These methods include computer modeling.
  2. Reduction, which refers to methods that enable researchers to obtain comparable levels of information from fewer animals, or to obtain more information from the same number of animals (a brief sample-size sketch illustrating this idea follows below).
  3. Refinement, which refers to methods that alleviate or minimize potential pain, suffering or distress, and enhance animal welfare for the animals used. These methods include non-invasive techniques.

The 3Rs have a broader scope than simply encouraging alternatives to animal testing; they aim to improve animal welfare and scientific quality where the use of animals cannot be avoided. The 3Rs are now implemented in many testing establishments worldwide and have been adopted by various pieces of legislation and regulation.
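In practice, Reduction is often approached through a statistical power calculation: the group size is chosen as the smallest number of animals that still gives an acceptable chance of detecting the effect under study. The following is a minimal sketch of that idea, assuming an illustrative effect size, significance level, and power target rather than values from any particular study.

    # A minimal sketch (illustrative values, not a prescribed protocol) of the
    # sample-size calculation behind the "Reduction" principle: choose the
    # smallest group size that still gives adequate statistical power.
    import math
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    # Assumptions: a large effect (Cohen's d = 1.0), 5% significance, 80% power.
    n_per_group = analysis.solve_power(effect_size=1.0, alpha=0.05, power=0.8)
    print(f"Smallest adequate group size: {math.ceil(n_per_group)} animals per group")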

Despite the widespread acceptance of the 3Rs, many countries—including Canada, Australia, Israel, South Korea, and Germany—have reported rising experimental use of animals in recent years with increased use of mice and, in some cases, fish while reporting declines in the use of cats, dogs, primates, rabbits, guinea pigs, and hamsters. Along with other countries, China has also escalated its use of GM animals, resulting in an increase in overall animal use.

Invertebrates


Fruit flies are invertebrates commonly used in animal testing.

Although many more invertebrates than vertebrates are used in animal testing, these studies are largely unregulated by law. The most frequently used invertebrate species are Drosophila melanogaster, a fruit fly, and Caenorhabditis elegans, a nematode worm. In the case of C. elegans, the worm's body is completely transparent and the precise lineage of all the organism's cells is known, while studies in the fly D. melanogaster can draw on an extensive array of genetic tools. These invertebrates offer some advantages over vertebrates in animal testing, including their short life cycle and the ease with which large numbers may be housed and studied. However, the lack of an adaptive immune system and their simple organs prevent worms from being used in several areas of medical research, such as vaccine development. Similarly, the fruit fly immune system differs greatly from that of humans, and diseases in insects can be different from diseases in vertebrates; however, fruit flies and waxworms can be useful in studies to identify novel virulence factors or pharmacologically active compounds.

Several invertebrate systems are considered acceptable alternatives to vertebrates in early-stage discovery screens. Because of similarities between the innate immune system of insects and mammals, insects can replace mammals in some types of studies. Drosophila melanogaster and the Galleria mellonella waxworm have been particularly important for analysis of virulence traits of mammalian pathogens. Waxworms and other insects have also proven valuable for the identification of pharmaceutical compounds with favorable bioavailability. The decision to adopt such models generally involves accepting a lower degree of biological similarity with mammals for significant gains in experimental throughput.

Vertebrates

Enos the space chimp before insertion into the Mercury-Atlas 5 capsule in 1961
 
This rat is being deprived of rapid eye movement (REM) sleep using a single-platform ("flower pot") technique. The water is within 1 cm of the small platform on which the rat sits. The rat is able to sleep, but at the onset of REM sleep it loses muscle tone and either falls into the water and must clamber back onto the platform to avoid drowning, or its nose becomes submerged, shocking it back to a wakened state.

In the U.S., estimates of the number of rats and mice used each year range from 11 million to between 20 and 100 million. Other rodents commonly used are guinea pigs, hamsters, and gerbils. Mice are the most commonly used vertebrate species because of their size, low cost, ease of handling, and fast reproduction rate. Mice are widely considered to be the best model of inherited human disease and share 95% of their genes with humans. With the advent of genetic engineering technology, genetically modified mice can be generated to order and can provide models for a range of human diseases. Rats are also widely used for physiology, toxicology and cancer research, but genetic manipulation is much harder in rats than in mice, which limits the use of these rodents in basic science.

Over 500,000 fish and 9,000 amphibians were used in the UK in 2016. The main species used are the zebrafish, Danio rerio, which is translucent during its embryonic stage, and the African clawed frog, Xenopus laevis. Over 20,000 rabbits were used for animal testing in the UK in 2004. Albino rabbits are used in eye irritancy tests (the Draize test) because rabbits have less tear flow than other animals, and the lack of eye pigment in albinos makes the effects easier to visualize. The number of rabbits used for this purpose has fallen substantially over the past two decades. In 1996, there were 3,693 procedures on rabbits for eye irritation in the UK, and by 2017 this number was just 63. Rabbits are also frequently used for the production of polyclonal antibodies.

Cats

Cats are most commonly used in neurological research. In 2016, 18,898 cats were used in the United States alone, around a third of which were used in experiments with the potential to cause "pain and/or distress", though only 0.1% of cat experiments involved potential pain that was not relieved by anesthetics or analgesics. In the UK, just 198 procedures were carried out on cats in 2017. The number has been around 200 for most of the last decade.

Dogs

Dogs are widely used in biomedical research, testing, and education—particularly beagles, because they are gentle and easy to handle, and because their use allows comparisons with historical data from beagles (a Reduction technique). They are used as models for human and veterinary diseases in cardiology, endocrinology, and bone and joint studies, research that tends to be highly invasive, according to the Humane Society of the United States. The most common use of dogs is in the safety assessment of new medicines for human or veterinary use, as a second species following testing in rodents, in accordance with the regulations set out in the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. One of the most significant advances in medical science involved the use of dogs to work out how the body produces insulin and the role of the pancreas in that process. Researchers found that the pancreas was responsible for producing insulin and that its removal caused the dog to develop diabetes; after the dog was injected with pancreatic extract (insulin), its blood glucose levels were significantly lowered. The advances made in this research involving dogs have resulted in a marked improvement in the quality of life for both humans and animals.

The U.S. Department of Agriculture's Animal Welfare Report shows that 60,979 dogs were used in USDA-registered facilities in 2016. In the UK, according to the UK Home Office, there were 3,847 procedures on dogs in 2017. Of the other large EU users of dogs, Germany conducted 3,976 procedures on dogs in 2016 and France conducted 4,204 procedures in 2016. In both cases this represents under 0.2% of the total number of procedures conducted on animals in the respective countries.

Non-human primates


Non-human primates (NHPs) are used in toxicology tests, studies of AIDS and hepatitis, studies of neurology, behavior and cognition, reproduction, genetics, and xenotransplantation. They are caught in the wild or purpose-bred. In the United States and China, most primates are domestically purpose-bred, whereas in Europe the majority are imported purpose-bred. The European Commission reported that in 2011, 6,012 monkeys were experimented on in European laboratories. According to the U.S. Department of Agriculture, there were 71,188 monkeys in U.S. laboratories in 2016. 23,465 monkeys were imported into the U.S. in 2014, including 929 that were caught in the wild. Most of the NHPs used in experiments are macaques, but marmosets, spider monkeys, and squirrel monkeys are also used, and baboons and chimpanzees are used in the US. As of 2015, there were approximately 730 chimpanzees in U.S. laboratories.

A 2003 survey found that 89% of singly-housed primates exhibited self-injurious or abnormal stereotypical behaviors, including pacing, rocking, hair pulling, and biting.

The first transgenic primate was produced in 2001, with the development of a method that could introduce new genes into a rhesus macaque. This transgenic technology is now being applied in the search for a treatment for the genetic disorder Huntington's disease. Notable studies on non-human primates have contributed to the development of the polio vaccine and of deep brain stimulation, and their current heaviest non-toxicological use occurs in the monkey AIDS model, SIV. In 2008, a proposal to ban all primate experiments in the EU sparked a vigorous debate.

Sources

Animals used by laboratories are largely supplied by specialist dealers. Sources differ for vertebrate and invertebrate animals. Most laboratories breed and raise flies and worms themselves, using strains and mutants supplied from a few main stock centers. For vertebrates, sources include breeders and dealers like Covance and Charles River Laboratories who supply purpose-bred and wild-caught animals; businesses that trade in wild animals such as Nafovanny; and dealers who supply animals sourced from pounds, auctions, and newspaper ads. Animal shelters also supply the laboratories directly. Large centers also exist to distribute strains of genetically modified animals; the International Knockout Mouse Consortium, for example, aims to provide knockout mice for every gene in the mouse genome.

A laboratory mouse cage. Mice are either bred commercially, or raised in the laboratory.

In the U.S., Class A breeders are licensed by the U.S. Department of Agriculture (USDA) to sell animals for research purposes, while Class B dealers are licensed to buy animals from "random sources" such as auctions, pound seizure, and newspaper ads. Some Class B dealers have been accused of kidnapping pets and illegally trapping strays, a practice known as bunching. It was in part out of public concern over the sale of pets to research facilities that the 1966 Laboratory Animal Welfare Act was ushered in—the Senate Committee on Commerce reported in 1966 that stolen pets had been retrieved from Veterans Administration facilities, the Mayo Institute, the University of Pennsylvania, Stanford University, and Harvard and Yale Medical Schools. The USDA recovered at least a dozen stolen pets during a raid on a Class B dealer in Arkansas in 2003.

Four states in the U.S.—Minnesota, Utah, Oklahoma, and Iowa—require their shelters to provide animals to research facilities. Fourteen states explicitly prohibit the practice, while the remainder either allow it or have no relevant legislation.

In the European Union, animal sources are governed by Council Directive 86/609/EEC, which requires lab animals to be specially bred unless the animal has been lawfully imported and is not a wild animal or a stray. The latter requirement may also be exempted by special arrangement. In 2010, the Directive was revised by EU Directive 2010/63/EU. In the UK, most animals used in experiments are bred for the purpose under the Animals (Scientific Procedures) Act 1986, but wild-caught primates may be used if exceptional and specific justification can be established. The United States also allows the use of wild-caught primates; between 1995 and 1999, 1,580 wild baboons were imported into the U.S. Over half the primates imported between 1995 and 2000 were handled by Charles River Laboratories or by Covance, which is the single largest importer of primates into the U.S.

Pain and suffering

Prior to dissection for educational purposes, chloroform was administered to this common sand frog to induce anesthesia and death.
 
 
Worldwide recognition of nonhuman animal sentience and suffering (map legend):
  • National recognition of animal sentience
  • Partial recognition of animal sentience¹
  • National recognition of animal suffering
  • Partial recognition of animal suffering²
  • No official recognition of animal sentience or suffering
  • Unknown
¹Certain animals are excluded, only mental health is acknowledged, and/or the laws vary internally.
²Only includes domestic animals.

The extent to which animal testing causes pain and suffering, and the capacity of animals to experience and comprehend them, is the subject of much debate.

According to the USDA, in 2016, 501,560 animals (61%), not including rats, mice, birds, or invertebrates, were used in procedures that involved no more than momentary pain or distress; 247,882 animals (31%) were used in procedures in which pain or distress was relieved by anesthesia, while 71,370 (9%) were used in studies that would cause pain or distress that would not be relieved.
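As a quick consistency check, these three categories sum exactly to the 820,812 animals reported for 2016 in the Numbers section above. The short sketch below simply reproduces that total and recomputes the shares from the raw counts; because the article's percentages are rounded, the computed shares can differ from them by about a point.

    # A quick arithmetic check: the 2016 USDA pain/distress categories sum to the
    # 820,812 AWA-covered animals reported for that year. The percentages printed
    # here are computed from the raw counts and may differ slightly from the
    # rounded figures quoted in the text.
    categories = {
        "no more than momentary pain or distress": 501_560,
        "pain or distress relieved by anesthesia": 247_882,
        "unrelieved pain or distress": 71_370,
    }

    total = sum(categories.values())
    print(f"total: {total}")  # prints 820812
    for name, count in categories.items():
        print(f"{name}: {count} ({count / total:.0%})")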

Since 2014, every research procedure in the UK has been retrospectively assessed for severity. The five categories are "sub-threshold", "mild", "moderate", "severe" and "non-recovery", the last being procedures in which an animal is anesthetized and subsequently killed without recovering consciousness. In 2017, 43% (1.61 million) of procedures were assessed as sub-threshold, 4% (0.14 million) as non-recovery, 36% (1.35 million) as mild, 15% (0.55 million) as moderate and 4% (0.14 million) as severe.

The idea that animals might not feel pain as human beings feel it traces back to the 17th-century French philosopher René Descartes, who argued that animals do not experience pain and suffering because they lack consciousness. Bernard Rollin of Colorado State University, the principal author of two U.S. federal laws regulating pain relief for animals, writes that researchers remained unsure into the 1980s as to whether animals experience pain, and that veterinarians trained in the U.S. before 1989 were simply taught to ignore animal pain. In his interactions with scientists and other veterinarians, he was regularly asked to "prove" that animals are conscious, and to provide "scientifically acceptable" grounds for claiming that they feel pain. Carbone writes that the view that animals feel pain differently is now a minority view. Academic reviews of the topic are more equivocal, noting that although the argument that animals have at least simple conscious thoughts and feelings has strong support, some critics continue to question how reliably animal mental states can be determined. However, some canine experts state that, while intelligence differs from animal to animal, dogs have the intelligence of a two- to two-and-a-half-year-old child, which supports the idea that dogs, at the very least, have some form of consciousness. The ability of invertebrates to experience pain and suffering is less clear; however, legislation in several countries (e.g. the U.K., New Zealand, and Norway) protects some invertebrate species if they are being used in animal testing.

In the U.S., the defining text on animal welfare regulation in animal testing is the Guide for the Care and Use of Laboratory Animals. This defines the parameters that govern animal testing in the U.S. It states: "The ability to experience and respond to pain is widespread in the animal kingdom...Pain is a stressor and, if not relieved, can lead to unacceptable levels of stress and distress in animals." The Guide states that the ability to recognize the symptoms of pain in different species is vital to applying pain relief efficiently, and that it is essential for the people caring for and using animals to be entirely familiar with these symptoms. On the subject of analgesics used to relieve pain, the Guide states: "The selection of the most appropriate analgesic or anesthetic should reflect professional judgment as to which best meets clinical and humane requirements without compromising the scientific aspects of the research protocol". Accordingly, all issues of animal pain and distress, and their potential treatment with analgesia and anesthesia, must be addressed for an animal protocol to receive approval.

In 2019, Katrien Devolder and Matthias Eggel proposed gene editing research animals to remove their ability to feel pain, as an intermediate step towards eventually stopping all experimentation on animals and adopting alternatives. This would not, however, stop research animals from experiencing psychological harm.

Euthanasia

Regulations require that scientists use as few animals as possible, especially for terminal experiments. However, while policy makers consider suffering to be the central issue and see animal euthanasia as a way to reduce suffering, others, such as the RSPCA, argue that the lives of laboratory animals have intrinsic value. Regulations focus on whether particular methods cause pain and suffering, not whether their death is undesirable in itself. The animals are euthanized at the end of studies for sample collection or post-mortem examination; during studies if their pain or suffering falls into certain categories regarded as unacceptable, such as depression, infection that is unresponsive to treatment, or the failure of large animals to eat for five days; or when they are unsuitable for breeding or unwanted for some other reason.

Methods of euthanizing laboratory animals are chosen to induce rapid unconsciousness and death without pain or distress. The preferred methods are those published by councils of veterinarians. The animal can be made to inhale a gas, such as carbon monoxide or carbon dioxide, by being placed in a chamber or by use of a face mask, with or without prior sedation or anesthesia. Sedatives or anesthetics such as barbiturates can be given intravenously, or inhalant anesthetics may be used. Amphibians and fish may be immersed in water containing an anesthetic such as tricaine. Physical methods are also used, with or without sedation or anesthesia depending on the method. Recommended methods include decapitation (beheading) for small rodents or rabbits. Cervical dislocation (breaking the neck or spine) may be used for birds, mice, and immature rats and rabbits. High-intensity microwave irradiation of the brain can preserve brain tissue and induce death in less than 1 second, but this is currently only used on rodents. Captive bolts may be used, typically on dogs, ruminants, horses, pigs and rabbits; they cause death by concussion of the brain. Gunshot may be used, but only in cases where a penetrating captive bolt may not be used. Some physical methods are only acceptable after the animal is unconscious. Electrocution may be used for cattle, sheep, swine, foxes, and mink after the animals are unconscious, often following an electrical stun. Pithing (inserting a tool into the base of the brain) is usable on animals that are already unconscious. Slow or rapid freezing, or inducing an air embolism, is acceptable only with prior anesthesia to induce unconsciousness.

Research classification

Pure research

Basic or pure research investigates how organisms behave, develop, and function. Those opposed to animal testing object that pure research may have little or no practical purpose, but researchers argue that it forms the necessary basis for the development of applied research, rendering the distinction between pure and applied research—research that has a specific practical aim—unclear. Pure research uses larger numbers and a greater variety of animals than applied research. Fruit flies, nematode worms, mice and rats together account for the vast majority, though small numbers of other species are used, ranging from sea slugs through to armadillos. Examples of the types of animals and experiments used in basic research include:

  • Studies on embryogenesis and developmental biology. Mutants are created by adding transposons into their genomes, or specific genes are deleted by gene targeting. By studying the developmental changes these alterations produce, scientists aim to understand both how organisms normally develop and what can go wrong in this process. These studies are particularly powerful because the basic controls of development, such as the homeobox genes, have similar functions in organisms as diverse as fruit flies and humans.
  • Experiments into behavior, to understand how organisms detect and interact with each other and their environment, in which fruit flies, worms, mice, and rats are all widely used. Studies of brain function, such as memory and social behavior, often use rats and birds. For some species, behavioral research is combined with enrichment strategies for animals in captivity because it allows them to engage in a wider range of activities.
  • Breeding experiments to study evolution and genetics. Laboratory mice, flies, fish, and worms are inbred through many generations to create strains with defined characteristics. These provide animals of a known genetic background, an important tool for genetic analyses. Larger mammals are rarely bred specifically for such studies due to their slow rate of reproduction, though some scientists take advantage of inbred domesticated animals, such as dog or cattle breeds, for comparative purposes. Scientists studying how animals evolve use many animal species to see how variations in where and how an organism lives (their niche) produce adaptations in their physiology and morphology. As an example, sticklebacks are now being used to study how many and which types of mutations are selected to produce adaptations in animals' morphology during the evolution of new species.

Applied research

Applied research aims to solve specific and practical problems. These may involve the use of animal models of diseases or conditions, which are often discovered or generated by pure research programmes. In turn, such applied studies may be an early stage in the drug discovery process. Examples include:

  • Genetic modification of animals to study disease. Transgenic animals have specific genes inserted, modified or removed, to mimic specific conditions such as single-gene disorders, for example Huntington's disease. Other models mimic complex, multifactorial diseases with genetic components, such as diabetes, or even transgenic mice that carry the same mutations that occur during the development of cancer. These models allow investigations into how and why the disease develops, as well as providing ways to develop and test new treatments. The vast majority of these transgenic models of human disease are lines of mice, the mammalian species in which genetic modification is most efficient. Smaller numbers of other animals are also used, including rats, pigs, sheep, fish, birds, and amphibians.
  • Studies on models of naturally occurring disease and conditions. Certain domestic and wild animals have a natural propensity or predisposition for certain conditions that are also found in humans. Cats are used as a model to develop immunodeficiency virus vaccines and to study leukemia because of their natural predisposition to FIV and feline leukemia virus. Certain breeds of dog suffer from narcolepsy, making them the major model used to study the human condition. Armadillos and humans are among only a few animal species that naturally suffer from leprosy; as the bacteria responsible for this disease cannot yet be grown in culture, armadillos are the primary source of bacilli used in leprosy vaccines.
  • Studies on induced animal models of human diseases. Here, an animal is treated so that it develops pathology and symptoms that resemble a human disease. Examples include restricting blood flow to the brain to induce stroke, or giving neurotoxins that cause damage similar to that seen in Parkinson's disease. Much animal research into potential treatments for humans is wasted because it is poorly conducted and not evaluated through systematic reviews. For example, although such models are now widely used to study Parkinson's disease, the British anti-vivisection interest group BUAV argues that these models only superficially resemble the disease symptoms, without the same time course or cellular pathology. In contrast, scientists assessing the usefulness of animal models of Parkinson's disease, as well as the medical research charity The Parkinson's Appeal, state that these models were invaluable and that they led to improved surgical treatments such as pallidotomy, new drug treatments such as levodopa, and later deep brain stimulation.
  • Animal testing has also included the use of placebo testing. In these cases animals are treated with a substance that produces no pharmacological effect, but is administered in order to determine any biological alterations due to the experience of a substance being administered, and the results are compared with those obtained with an active compound.

Xenotransplantation

Xenotransplantation research involves transplanting tissues or organs from one species to another, as a way to overcome the shortage of human organs for use in organ transplants. Current research involves using primates as the recipients of organs from pigs that have been genetically modified to reduce the primates' immune response against the pig tissue. Although transplant rejection remains a problem, recent clinical trials that involved implanting pig insulin-secreting cells into diabetics did reduce these people's need for insulin.

Documents released to the news media by the animal rights organization Uncaged Campaigns showed that, between 1994 and 2000, wild baboons imported to the UK from Africa by Imutran Ltd, a subsidiary of Novartis Pharma AG, working in conjunction with Cambridge University and Huntingdon Life Sciences, suffered serious and sometimes fatal injuries in experiments that involved grafting pig tissues. A scandal occurred when it was revealed that the company had communicated with the British government in an attempt to avoid regulation.

Toxicology testing

Toxicology testing, also known as safety testing, is conducted by pharmaceutical companies testing drugs, or by contract animal-testing facilities, such as Huntingdon Life Sciences, on behalf of a wide variety of customers. According to 2005 EU figures, around one million animals are used every year in Europe in toxicology tests, which account for about 10% of all procedures. According to Nature, 5,000 animals are used for each chemical being tested, with 12,000 needed to test pesticides. The tests are conducted without anesthesia, because interactions between drugs can affect how animals detoxify chemicals and may interfere with the results.

Toxicology tests are used to examine finished products such as pesticides, medications, food additives, packing materials, and air fresheners, or their chemical ingredients. Most tests involve testing ingredients rather than finished products, but according to BUAV, manufacturers believe these tests overestimate the toxic effects of substances; they therefore repeat the tests using their finished products to obtain a less toxic label.

The substances are applied to the skin or dripped into the eyes; injected intravenously, intramuscularly, or subcutaneously; inhaled either by placing a mask over the animals and restraining them, or by placing them in an inhalation chamber; or administered orally, through a tube into the stomach, or simply in the animal's food. Doses may be given once, repeated regularly for many months, or for the lifespan of the animal.

There are several different types of acute toxicity tests. The LD50 ("Lethal Dose 50%") test is used to evaluate the toxicity of a substance by determining the dose required to kill 50% of the test animal population. This test was removed from OECD international guidelines in 2002, replaced by methods such as the fixed dose procedure, which use fewer animals and cause less suffering. Abbott writes that, as of 2005, "the LD50 acute toxicity test ... still accounts for one-third of all animal [toxicity] tests worldwide".
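To make concrete what an LD50 figure represents, the sketch below fits a logistic dose-response curve to entirely hypothetical mortality data and reads off the dose at which predicted mortality reaches 50%. The doses, mortality fractions, and two-parameter curve form are illustrative assumptions, not a regulatory method or data from any actual test.

    # A minimal sketch, using hypothetical data, of how an LD50 can be estimated:
    # fit a logistic dose-response curve and read off the dose giving 50% mortality.
    import numpy as np
    from scipy.optimize import curve_fit

    def dose_response(log_dose, log_ld50, slope):
        """Two-parameter logistic: predicted mortality as a function of log10(dose)."""
        return 1.0 / (1.0 + np.exp(-slope * (log_dose - log_ld50)))

    doses = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])   # mg/kg (illustrative)
    mortality = np.array([0.0, 0.1, 0.4, 0.8, 1.0])        # fraction dead (illustrative)

    params, _ = curve_fit(dose_response, np.log10(doses), mortality, p0=[2.0, 1.0])
    ld50 = 10 ** params[0]
    print(f"Estimated LD50 ≈ {ld50:.0f} mg/kg")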

Irritancy can be measured using the Draize test, in which a test substance is applied to the eyes or skin of an animal, usually an albino rabbit. In Draize eye testing, the effects of the substance are observed at intervals and any damage or irritation is graded, but the test should be halted and the animal killed if it shows "continuing signs of severe pain or distress". The Humane Society of the United States writes that the procedure can cause redness, ulceration, hemorrhaging, cloudiness, or even blindness. The test has also been criticized by scientists for being cruel and inaccurate, subjective, over-sensitive, and failing to reflect human exposures in the real world. Although no accepted in vitro alternatives exist, a modified form of the Draize test called the low volume eye test may reduce suffering and provide more realistic results, and it was adopted as the new standard in September 2009. However, the Draize test will still be used for substances that are not severe irritants.

The most stringent tests are reserved for drugs and foodstuffs. For these, a number of tests are performed, lasting less than a month (acute), one to three months (subchronic), and more than three months (chronic) to test general toxicity (damage to organs), eye and skin irritancy, mutagenicity, carcinogenicity, teratogenicity, and reproductive problems. The cost of the full complement of tests is several million dollars per substance and it may take three or four years to complete.

These toxicity tests provide, in the words of a 2006 United States National Academy of Sciences report, "critical information for assessing hazard and risk potential". Animal tests may overestimate risk, with false positive results being a particular problem, but false positives appear not to be prohibitively common. Variability in results arises from using the effects of high doses of chemicals in small numbers of laboratory animals to try to predict the effects of low doses in large numbers of humans. Although relationships do exist, opinion is divided on how to use data on one species to predict the exact level of risk in another.

Scientists face growing pressure to move away from using traditional animal toxicity tests to determine whether manufactured chemicals are safe. Among the variety of approaches to toxicity evaluation, those attracting increasing interest are in vitro cell-based sensing methods that use fluorescence.

Cosmetics testing

The "Leaping Bunny" logo: Some products in Europe that are not tested on animals carry this symbol.

Cosmetics testing on animals is particularly controversial. Such tests, which are still conducted in the U.S., involve general toxicity, eye and skin irritancy, phototoxicity (toxicity triggered by ultraviolet light) and mutagenicity.

Cosmetics testing on animals is banned in India, the European Union, Israel and Norway, while legislators in the U.S. and Brazil are currently considering similar bans. In 2002, after 13 years of discussion, the European Union agreed to phase in a near-total ban on the sale of animal-tested cosmetics by 2009, and to ban all cosmetics-related animal testing. France, home to the world's largest cosmetics company, L'Oreal, protested the proposed ban by lodging a case at the European Court of Justice in Luxembourg, asking that the ban be quashed. The ban is also opposed by the European Federation for Cosmetics Ingredients, which represents 70 companies in Switzerland, Belgium, France, Germany, and Italy. In October 2014, India passed stricter laws that also ban the importation of any cosmetic products that are tested on animals.

Drug testing

Before the early 20th century, laws regulating drugs were lax. Currently, all new pharmaceuticals undergo rigorous animal testing before being licensed for human use. Tests on pharmaceutical products involve:

  • metabolic tests, investigating pharmacokinetics—how drugs are absorbed, metabolized and excreted by the body when introduced orally, intravenously, intraperitoneally, intramuscularly, or transdermally.
  • toxicology tests, which gauge acute, sub-acute, and chronic toxicity. Acute toxicity is studied by using a rising dose until signs of toxicity become apparent. Current European legislation demands that "acute toxicity tests must be carried out in two or more mammalian species" covering "at least two different routes of administration". Sub-acute toxicity testing involves giving the drug to the animals for four to six weeks in doses below the level at which it causes rapid poisoning, in order to discover whether any toxic drug metabolites build up over time. Testing for chronic toxicity can last up to two years and, in the European Union, is required to involve two species of mammals, one of which must be non-rodent.
  • efficacy studies, which test whether experimental drugs work by inducing the appropriate illness in animals. The drug is then administered in a double-blind controlled trial, which allows researchers to determine the effect of the drug and the dose-response curve.
  • Specific tests on reproductive function, embryonic toxicity, or carcinogenic potential can all be required by law, depending on the result of other studies and the type of drug being tested.

Education

It is estimated that 20 million animals are used annually for educational purposes in the United States, including classroom observational exercises, dissections, and live-animal surgeries. Frogs, fetal pigs, perch, cats, earthworms, grasshoppers, crayfish and starfish are commonly used in classroom dissections. Alternatives to the use of animals in classroom dissections are widely used, with many U.S. states and school districts mandating that students be offered the choice not to dissect. Citing the wide availability of alternatives and the decimation of local frog species, India banned dissections in 2014.

The Sonoran Arthropod Institute hosts an annual Invertebrates in Education and Conservation Conference to discuss the use of invertebrates in education. There also are efforts in many countries to find alternatives to using animals in education. The NORINA database, maintained by Norecopa, lists products that may be used as alternatives or supplements to animal use in education, and in the training of personnel who work with animals. These include alternatives to dissection in schools. InterNICHE has a similar database and a loans system.

In November 2013, the U.S.-based company Backyard Brains released for sale to the public what they call the "Roboroach", an "electronic backpack" that can be attached to cockroaches. The operator is required to amputate a cockroach's antennae, use sandpaper to wear down the shell, insert a wire into the thorax, and then glue the electrodes and circuit board onto the insect's back. A mobile phone app can then be used to control it via Bluetooth. It has been suggested that the use of such a device may be a teaching aid that can promote interest in science. The makers of the "Roboroach" have been funded by the National Institute of Mental Health and state that the device is intended to encourage children to become interested in neuroscience.

Defense

Animals are used by the military to develop weapons, vaccines, battlefield surgical techniques, and defensive clothing. For example, in 2008 the United States Defense Advanced Research Projects Agency used live pigs to study the effects of improvised explosive device explosions on internal organs, especially the brain.

In the US military, goats are commonly used to train combat medics. (Goats became the main animal species used for this purpose after the Pentagon phased out using dogs for medical training in the 1980s.) While modern mannequins used in medical training are quite effective at simulating the behavior of a human body, some trainees feel that "the goat exercise provide[s] a sense of urgency that only real life trauma can provide". Nevertheless, in 2014, the U.S. Coast Guard announced that it would reduce the number of animals it uses in its training exercises by half after PETA released video showing Guard members cutting off the limbs of unconscious goats with tree trimmers and inflicting other injuries with a shotgun, pistol, ax and a scalpel. That same year, citing the availability of human simulators and other alternatives, the Department of Defense announced it would begin reducing the number of animals it uses in various training programs. In 2013, several Navy medical centers stopped using ferrets in intubation exercises after complaints from PETA.

Besides the United States, six out of 28 NATO countries, including Poland and Denmark, use live animals for combat medic training.

Ethics

Viewpoints

Monument for animals used in testing at Keio University

The moral and ethical questions raised by performing experiments on animals are subject to debate, and viewpoints have shifted significantly over the 20th century. There remain disagreements about which procedures are useful for which purposes, as well as disagreements over which ethical principles apply to which species.

A 2015 Gallup poll found that 67% of Americans were "very concerned" or "somewhat concerned" about animals used in research. A Pew poll taken the same year found 50% of American adults opposed the use of animals in research.

Still, a wide range of viewpoints exist. The view that animals have moral rights (animal rights) is a philosophical position proposed by Tom Regan, among others, who argues that animals are beings with beliefs and desires, and as such are the "subjects of a life" with moral value and therefore moral rights. Regan still sees ethical differences between killing human and non-human animals, and argues that to save the former it is permissible to kill the latter. Likewise, a "moral dilemma" view suggests that avoiding potential benefit to humans is unacceptable on similar grounds, and holds the issue to be a dilemma in balancing such harm to humans against the harm done to animals in research. In contrast, an abolitionist view in animal rights holds that there is no moral justification for any harmful research on animals that is not to the benefit of the individual animal. Bernard Rollin argues that benefits to human beings cannot outweigh animal suffering, and that human beings have no moral right to use an animal in ways that do not benefit that individual. Donald Watson has stated that vivisection and animal experimentation "is probably the cruelest of all Man's attack on the rest of Creation." Another prominent position is that of philosopher Peter Singer, who argues that there are no grounds to include a being's species in considerations of whether their suffering is important in utilitarian moral considerations. Malcolm Macleod and collaborators argue that most controlled animal studies do not employ randomization, allocation concealment, and blinded outcome assessment, and that failure to employ these features exaggerates the apparent benefit of drugs tested in animals, leading to a failure to translate much animal research for human benefit.
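To make concrete what randomization, allocation concealment, and blinded outcome assessment mean in this context: animals are assigned to groups by a pre-generated random scheme, and the people scoring outcomes see only coded labels rather than group names. The sketch below is a minimal, entirely hypothetical illustration of that workflow, not a description of any particular study.

    # A minimal, hypothetical illustration of randomized allocation with
    # concealment: animals are assigned to groups from a shuffled, pre-generated
    # list, and outcome assessors see only coded group labels.
    import random

    random.seed(42)  # fixed seed so this illustrative allocation is reproducible

    animal_ids = [f"animal_{i:02d}" for i in range(1, 21)]  # 20 hypothetical animals
    allocation = ["treatment"] * 10 + ["control"] * 10
    random.shuffle(allocation)                               # randomization

    # Concealment/blinding: the assessor's sheet carries only coded labels.
    codes = {"treatment": "group A", "control": "group B"}
    assessor_sheet = {aid: codes[arm] for aid, arm in zip(animal_ids, allocation)}
    print(assessor_sheet)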

Governments such as the Netherlands and New Zealand have responded to the public's concerns by outlawing invasive experiments on certain classes of non-human primates, particularly the great apes. In 2015, captive chimpanzees in the U.S. were added to the Endangered Species Act, adding new roadblocks for those wishing to experiment on them. Similarly, citing ethical considerations and the availability of alternative research methods, the U.S. NIH announced in 2013 that it would dramatically reduce and eventually phase out experiments on chimpanzees.

The British government has required that the cost to animals in an experiment be weighed against the gain in knowledge. Some medical schools and agencies in China, Japan, and South Korea have built cenotaphs for killed animals. In Japan there are also annual memorial services (Ireisai 慰霊祭) for animals sacrificed at medical schools.

Dolly the sheep: the first clone produced from the somatic cells of an adult mammal

Various specific cases of animal testing have drawn attention, including both instances of beneficial scientific research, and instances of alleged ethical violations by those performing the tests. The fundamental properties of muscle physiology were determined with work done using frog muscles (including the force-generating mechanism of all muscle, the length-tension relationship, and the force-velocity curve), and frogs are still the preferred model organism due to the long survival of muscles in vitro and the possibility of isolating intact single-fiber preparations (not possible in other organisms). Modern physical therapy and the understanding and treatment of muscular disorders are based on this work and subsequent work in mice (often engineered to express disease states such as muscular dystrophy). In February 1997 a team at the Roslin Institute in Scotland announced the birth of Dolly the sheep, the first mammal to be cloned from an adult somatic cell.

Concerns have been raised over the mistreatment of primates undergoing testing. In 1985 the case of Britches, a macaque monkey at the University of California, Riverside, gained public attention. He had his eyelids sewn shut and a sonar sensor on his head as part of an experiment to test sensory substitution devices for blind people. The laboratory was raided by the Animal Liberation Front in 1985, which removed Britches and 466 other animals. The National Institutes of Health conducted an eight-month investigation and concluded, however, that no corrective action was necessary. During the 2000s other cases have made headlines, including experiments at the University of Cambridge and Columbia University in 2002. In 2004 and 2005, People for the Ethical Treatment of Animals (PETA) shot undercover footage of staff at the Virginia laboratory of Covance, a contract research organization that provides animal-testing services. Following release of the footage, the U.S. Department of Agriculture fined Covance $8,720 for 16 citations, three of which involved lab monkeys; the other citations involved administrative issues and equipment.

Threats to researchers

Threats of violence to animal researchers are not uncommon.

In 2006, a primate researcher at the University of California, Los Angeles (UCLA) shut down the experiments in his lab after threats from animal rights activists. The researcher had received a grant to use 30 macaque monkeys for vision experiments; each monkey was anesthetized for a single physiological experiment lasting up to 120 hours, and then euthanized. The researcher's name, phone number, and address were posted on the website of the Primate Freedom Project. Demonstrations were held in front of his home. A Molotov cocktail was placed on the porch of what was believed to be the home of another UCLA primate researcher; instead, it was accidentally left on the porch of an elderly woman unrelated to the university. The Animal Liberation Front claimed responsibility for the attack. As a result of the campaign, the researcher sent an email to the Primate Freedom Project stating "you win", and "please don't bother my family anymore". In another incident at UCLA in June 2007, the Animal Liberation Brigade placed a bomb under the car of a UCLA children's ophthalmologist who experiments on cats and rhesus monkeys; the bomb had a faulty fuse and did not detonate.

In 1997, PETA filmed staff from Huntingdon Life Sciences, showing dogs being mistreated. The employees responsible were dismissed, with two given community service orders and ordered to pay £250 costs, the first lab technicians to have been prosecuted for animal cruelty in the UK. The Stop Huntingdon Animal Cruelty campaign used tactics ranging from non-violent protest to the alleged firebombing of houses owned by executives associated with HLS's clients and investors. The Southern Poverty Law Center, which monitors US domestic extremism, has described SHAC's modus operandi as "frankly terroristic tactics similar to those of anti-abortion extremists," and in 2005 an official with the FBI's counter-terrorism division referred to SHAC's activities in the United States as domestic terrorist threats. Thirteen members of SHAC were jailed for terms of between 15 months and eleven years on charges of conspiracy to blackmail or harm HLS and its suppliers.

These attacks—as well as similar incidents that caused the Southern Poverty Law Center to declare in 2002 that the animal rights movement had "clearly taken a turn toward the more extreme"—prompted the US government to pass the Animal Enterprise Terrorism Act and the UK government to add the offense of "Intimidation of persons connected with animal research organisation" to the Serious Organised Crime and Police Act 2005. Such legislation and the arrest and imprisonment of activists may have decreased the incidence of attacks.

Scientific criticism

Systematic reviews have pointed out that animal testing often fails to accurately mirror outcomes in humans. For instance, a 2013 review noted that some 100 vaccines have been shown to prevent HIV in animals, yet none of them have worked in humans. Effects seen in animals may not be replicated in humans, and vice versa: many corticosteroids cause birth defects in animals but not in humans, while, conversely, thalidomide causes serious birth defects in humans but not in animals. A 2004 paper concluded that much animal research is wasted because systematic reviews are not used and because of poor methodology. A 2006 review found multiple studies in which there were promising results for new drugs in animals, but human clinical studies did not show the same results; the researchers suggested that this might be due to researcher bias, or simply because animal models do not accurately reflect human biology. A lack of meta-reviews may be partly to blame, and poor methodology is an issue in many studies. A 2009 review noted that many animal experiments did not use blinding, a key element of many scientific studies in which researchers are kept unaware of parts of the study they are working on in order to reduce bias.
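The randomization and blinding these reviews call for can be sketched in a few lines of code. The following is a minimal, illustrative allocation script, not taken from any cited study; the animal identifiers, group names, and seed are invented:

import random

def allocate(animal_ids, groups=("treatment", "control"), seed=None):
    """Randomly assign animals to groups and return coded labels so that
    outcome assessment can be performed blind to group membership."""
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)  # randomization
    allocation = {aid: groups[i % len(groups)] for i, aid in enumerate(ids)}
    # Concealment: assessors see only neutral codes, not the group labels.
    codes = {aid: "subject-%03d" % i for i, aid in enumerate(ids)}
    return allocation, codes

# Hypothetical example: eight animals split evenly between two groups.
allocation, codes = allocate(["mouse-%d" % n for n in range(8)], seed=42)
print(codes["mouse-0"], allocation["mouse-0"])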

Alternatives to animal testing

Most scientists and governments state that animal testing should cause as little suffering to animals as possible, and that animal tests should only be performed where necessary. The "Three Rs" are guiding principles for the use of animals in research in most countries. Whilst replacement of animals, i.e. alternatives to animal testing, is one of the principles, their scope is much broader. Although such principles have been welcomed as a step forward by some animal welfare groups, they have also been criticized both as outdated in light of current research and as having little practical effect in improving animal welfare.

The scientists and engineers at Harvard's Wyss Institute have created "organs-on-a-chip", including the "lung-on-a-chip" and "gut-on-a-chip". Researchers at cellasys in Germany developed a "skin-on-a-chip". These tiny devices contain human cells in a 3-dimensional system that mimics human organs. The chips can be used instead of animals in in vitro disease research, drug testing, and toxicity testing. Researchers have also begun using 3-D bioprinters to create human tissues for in vitro testing.

Another non-animal research method is in silico or computer simulation and mathematical modeling, which seeks to investigate and ultimately predict toxicity and drug effects in humans without using animals. This is done by investigating test compounds at the molecular level, using recent advances in technological capabilities, with the ultimate goal of creating treatments unique to each patient.
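As a rough illustration of the idea only, and not a description of any production toxicology pipeline, the sketch below fits a simple statistical model to made-up molecular descriptors and uses it to score an untested compound; real in silico methods rely on far richer, chemistry-aware models:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical molecular descriptors (e.g., molecular weight, logP, polar
# surface area) for compounds with known toxicity labels; values are invented.
X_train = np.array([[320.4, 2.1, 75.0],
                    [150.2, 0.3, 40.1],
                    [410.7, 4.8, 20.5],
                    [280.9, 1.2, 90.3]])
y_train = np.array([1, 0, 1, 0])  # 1 = toxic in past assays, 0 = non-toxic

model = LogisticRegression().fit(X_train, y_train)

# Score a new, untested compound by its predicted probability of toxicity.
candidate = np.array([[305.0, 3.0, 55.0]])
print(model.predict_proba(candidate)[0, 1])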

Microdosing is another alternative to the use of animals in experimentation. Microdosing is a process whereby volunteers are administered a small dose of a test compound, allowing researchers to investigate its pharmacological effects without harming the volunteers. Microdosing can replace the use of animals in pre-clinical drug screening and can reduce the number of animals used in safety and toxicity testing.

Additional alternative methods include positron emission tomography (PET), which allows scanning of the human brain in vivo, and comparative epidemiological studies of disease risk factors among human populations.

Simulators and computer programs have also replaced the use of animals in dissection, teaching and training exercises.

Official bodies such as the European Centre for the Validation of Alternative Test Methods of the European Commission, the Interagency Coordinating Committee for the Validation of Alternative Methods in the US, ZEBET in Germany, and the Japanese Center for the Validation of Alternative Methods (among others) also promote and disseminate the 3Rs. These bodies are mainly driven by responding to regulatory requirements, such as supporting the cosmetics testing ban in the EU by validating alternative methods.

The European Partnership for Alternative Approaches to Animal Testing serves as a liaison between the European Commission and industries. The European Consensus Platform for Alternatives coordinates efforts amongst EU member states.

Academic centers also investigate alternatives, including the Center for Alternatives to Animal Testing at the Johns Hopkins University and the NC3Rs in the UK.

BRAIN Initiative

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/BRAIN_Initiative

The White House BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies) is a collaborative, public-private research initiative announced by the Obama administration on April 2, 2013, with the goal of supporting the development and application of innovative technologies that can create a dynamic understanding of brain function.

This activity is a Grand Challenge focused on revolutionizing our understanding of the human brain, and was developed by the White House Office of Science and Technology Policy (OSTP) as part of a broader White House Neuroscience Initiative. Inspired by the Human Genome Project, BRAIN aims to help researchers uncover the mysteries of brain disorders, such as Alzheimer's and Parkinson's diseases, depression, and traumatic brain injury (TBI).

Participants in BRAIN and affiliates of the project include DARPA and IARPA as well as numerous private companies, universities, and other organizations in the United States, Australia, Canada, and Denmark.

Background

The BRAIN Initiative reflects a number of influences, stemming back over a decade. Some of these include: planning meetings at the National Institutes of Health that led to the NIH's Blueprint for Neuroscience Research; workshops at the National Science Foundation (NSF) on cognition, neuroscience, and convergent science, including a 2006 report on "Grand Challenges of Mind and Brain"; reports from the National Research Council and the Institute of Medicine's Forum on Neuroscience and Nervous System Disorders, including "From Molecules to Mind: Challenges for the 21st Century," a report of a June 25, 2008 Workshop on Grand Challenges in Neuroscience; years of research and reports from scientists and professional societies; and congressional interest.

One important activity was the Brain Activity Map Project. In September 2011, molecular biologist Miyoung Chun of The Kavli Foundation organized a conference in London, at which scientists first put forth the idea of such a project. At subsequent meetings, scientists from US government laboratories, including members of the Office of Science and Technology Policy, and from the Howard Hughes Medical Institute and the Allen Institute for Brain Science, along with representatives from Google, Microsoft, and Qualcomm, discussed possibilities for a future government-led project.

Other influences included the interdisciplinary "Decade of the Mind" project led by James L. Olds, who is currently the Assistant Director for Biological Sciences at NSF, and the "Revolutionizing Prosthetics" project at DARPA, led by Dr. Geoffrey Ling and shown on 60 Minutes in April 2009.

Development of the plan for the BRAIN Initiative within the Executive Office of the President (EOP) was led by OSTP and included the following EOP staff: Philip Rubin, then Principal Assistant Director for Science and leader of the White House Neuroscience Initiative; Thomas Kalil, Deputy Director for Technology and Innovation; Cristin Dorgelo, then Assistant Director for Grand Challenges, and later Chief of Staff at OSTP; and Carlos Peña, Assistant Director for Emerging Technologies and currently the Division Director for the Division of Neurological and Physical Medicine Devices, in the Office of Device Evaluation, Center for Devices and Radiological Health (CDRH), at the U.S. Food and Drug Administration (FDA).

Announcement

NIH Director Dr. Francis Collins and President Barack Obama announcing the BRAIN Initiative

On April 2, 2013, at a White House event, President Barack Obama announced The BRAIN Initiative, with proposed initial expenditures for fiscal year 2014 of approximately $110 million from the Defense Advanced Research Projects Agency (DARPA), the National Institutes of Health (NIH), and the National Science Foundation (NSF). The President also directed the Presidential Commission for the Study of Bioethical Issues to explore the ethical, legal, and societal implications raised by the initiative and by neuroscience in general. Additional commitments were also made by the Allen Institute for Brain Science, the Howard Hughes Medical Institute, and The Kavli Foundation. The NIH also announced the creation of a working group of the Advisory Committee to the Director, led by neuroscientists Cornelia Bargmann and William Newsome and with ex officio participation from DARPA and NSF, to help shape NIH's role in the BRAIN Initiative. NSF planned to receive advice from its directorate advisory committees, from the National Science Board, and from a series of meetings bringing together scientists in neuroscience and related areas.

Experimental approaches

News reports said the research would map the dynamics of neuron activity in mice and other animals and eventually the tens of billions of neurons in the human brain.

In a 2012 scientific commentary outlining experimental plans for a more limited project, Alivisatos et al. outlined a variety of specific experimental techniques that might be used to achieve what they termed a "functional connectome", as well as new technologies that will have to be developed in the course of the project. They indicated that initial studies might be done in Caenorhabditis elegans, followed by Drosophila, because of their comparatively simple neural circuits. Mid-term studies could be done in zebrafish, mice, and the Etruscan shrew, with studies ultimately to be done in primates and humans. They proposed the development of nanoparticles that could be used as voltage sensors that would detect individual action potentials, as well as nanoprobes that could serve as electrophysiological multielectrode arrays. In particular, they called for the use of wireless, noninvasive methods of neuronal activity detection, either utilizing microelectronic very-large-scale integration, or based on synthetic biology rather than microelectronics. In one such proposed method, enzymatically produced DNA would serve as a "ticker tape record" of neuronal activity, based on calcium ion-induced errors in coding by DNA polymerase. Data would be analyzed and modeled by large scale computation. A related technique proposed the use of high-throughput DNA sequencing for rapidly mapping neural connectivity.

Timeline

The timeline proposed by the Working Group in 2014 is:

  • 2016–2020: technology development and validation
  • 2020–2025: application of those technologies in an integrated fashion to make fundamental new discoveries about the brain

Working group

The advisory committee is:

Participants

As of December 2018, the BRAIN Initiative website lists the following participants and affiliates:

  • National Institutes of Health (Alliance Member)
  • National Science Foundation (Alliance Member)
  • U.S. Food and Drug Administration (Alliance Member)
  • Intelligence Advanced Research Projects Activity (IARPA) (Alliance Member)
  • White House BRAIN Initiative (Alliance Affiliate)
  • Defense Advanced Research Projects Agency (B.I. Participant)
  • Simons Foundation (Alliance Member)
  • National Photonics Initiative (B.I. Participant)
  • Allen Institute for Brain Science (Alliance Member)
  • Janelia/Howard Hughes Medical Institute (Alliance Affiliate)
  • Neurotechnology Architecting Network (B.I. Participant)
  • Pacific Northwest Neuroscience Neighborhood (B.I. Participant)
  • University of California System Cal-BRAIN (B.I. Participant)
  • University of Pittsburgh Brain Institute (B.I. Participant)
  • Blackrock Microsystems (B.I. Participant)
  • GlaxoSmithKline (B.I. Participant)
  • Brain & Behavior Research Foundation (B.I. Participant)
  • Boston University Center for Systems Neuroscience (B.I. Participant)
  • General Electric (B.I. Participant)
  • Boston Scientific (B.I. Participant)
  • Carnegie Mellon University BrainHub (B.I. Participant)
  • NeuroNexus (B.I. Participant)
  • Medtronic (B.I. Participant)
  • Pediatric Brain Foundation (B.I. Participant)
  • University of Texas System UT Neuroscience (B.I. Participant)
  • University of Arizona Center for Innovation in Brain Science (B.I. Participant)
  • Salk Institute for Biological Studies (B.I. Participant)
  • Second Sight (B.I. Participant)
  • Kavli Foundation (Alliance Member)
  • University of Utah Neurosciences Gateway (B.I. Participant)
  • Ripple (B.I. Participant)
  • Lawrence Livermore National Laboratory (B.I. Participant)
  • NeuroPace (B.I. Participant)
  • Google (B.I. Participant)
  • Inscopix (B.I. Participant)
  • Australian National Health and Medical Research Council (B.I. Participant)
  • Brain Canada Foundation (B.I. Participant)
  • Denmark's Lundbeck Foundation (B.I. Participant).

Reactions

Scientists offered differing views of the plan. Neuroscientist John Donoghue said that the project would fill a gap in neuroscience research between, on the one hand, activity measurements at the level of brain regions using methods such as fMRI, and, on the other hand, measurements at the level of single cells. Psychologist Ed Vul expressed concern, however, that the initiative would divert funding from individual investigator studies. Neuroscientist Donald Stein expressed concern that it would be a mistake to begin by spending money on technological methods, before knowing exactly what would be measured. Physicist Michael Roukes argued instead that methods in nanotechnology are becoming sufficiently mature to make the time right for a brain activity map. Neuroscientist Rodolfo Llinás declared at the first Rockefeller meeting "What has happened here is magnificent, never before in neuroscience have I seen so much unity in such a glorious purpose."

The project faces great logistical challenges. Neuroscientists estimated that it would generate 300 exabytes of data every year, presenting a significant technical barrier. Most of the available high-resolution brain activity monitors are of limited use, as they must be surgically implanted by opening the skull. Parallels have been drawn to past large-scale government-led research efforts including the mapping of the human genome, the voyage to the moon, and the development of the atomic bomb.

Computational cognition

From Wikipedia, the free encyclopedia

Computational cognition (sometimes referred to as computational cognitive science or computational psychology) is the study of the computational basis of learning and inference by mathematical modeling, computer simulation, and behavioral experiments. In psychology, it is an approach which develops computational models based on experimental results. It seeks to understand the basis of human information processing. Early computational cognitive scientists sought to revive and create a scientific form of Brentano's psychology.

Artificial intelligence

There are two main purposes for producing artificial intelligence: to produce intelligent behaviors regardless of how they are generated, and to model the intelligent behaviors found in nature. Early in the field's existence, artificial intelligence was not expected to emulate human cognition. During the 1950s and 1960s, the economist Herbert Simon and Allen Newell attempted to formalize human problem-solving skills by using the results of psychological studies to develop programs that implement the same problem-solving techniques as people would. Their work laid the foundation for symbolic AI and computational cognition, and even some advancements for cognitive science and cognitive psychology.

The field of symbolic AI is based on the physical symbol systems hypothesis by Simon and Newell, which states that expressing aspects of cognitive intelligence can be achieved through the manipulation of symbols. However, John McCarthy focused more on the initial purpose of artificial intelligence, which is to break down the essence of logical and abstract reasoning regardless of whether or not humans employ the same mechanism.

Over the next decades, progress in artificial intelligence came to focus more on developing logic-based and knowledge-based programs, veering away from the original purpose of symbolic AI. Researchers started to believe that symbolic artificial intelligence might never be able to imitate some intricate processes of human cognition like perception or learning. The then-perceived impossibility (since refuted) of implementing emotion in AI was seen as a stumbling block on the path to achieving human-like cognition with computers. Researchers began to take a "sub-symbolic" approach to creating intelligence without explicitly representing knowledge. This movement led to the emerging disciplines of computational modeling, connectionism, and computational intelligence.

Computational modeling

Aimed more at understanding human cognition than at building artificial intelligence, computational cognitive modeling emerged from the need to define various cognitive functionalities (like motivation, emotion, or perception) by representing them in computational models of mechanisms and processes. Computational models study complex systems through the use of algorithms with many variables and extensive computational resources to produce computer simulations. Simulation is achieved by adjusting the variables, changing one alone or combining them together, and observing the effect on the outcomes. The results help experimenters make predictions about what would happen in the real system if similar changes were to occur.

When computational models attempt to mimic human cognitive functioning, all the details of the function must be known for them to transfer and display properly through the models, allowing researchers to thoroughly understand and test an existing theory because no variables are vague and all variables are modifiable. Consider the model of memory built by Atkinson and Shiffrin in 1968: it showed how rehearsal leads to long-term memory, where the rehearsed information is stored. Despite the advance it made in revealing the function of memory, this model fails to answer crucial questions such as: how much information can be rehearsed at a time? How long does it take for information to transfer from rehearsal to long-term memory? Similarly, other computational models raise more questions about cognition than they answer, making their contributions much less significant for the understanding of human cognition than other cognitive approaches. An additional shortcoming of computational modeling is its reported lack of objectivity.
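To make the point concrete, here is a minimal, assumption-laden sketch of the two-store idea: each rehearsal gives an item a fixed chance of being copied into the long-term store. The transfer probability and rehearsal count are invented free parameters, which is exactly the kind of unanswered question noted above:

import random

def rehearse(items, rehearsals=5, p_transfer=0.3, seed=0):
    """Toy two-store memory model: each rehearsal of an item in the
    short-term store gives it a fixed chance of entering the long-term
    store. The transfer probability is an assumed free parameter, not a
    value specified by Atkinson and Shiffrin."""
    rng = random.Random(seed)
    long_term = set()
    for _ in range(rehearsals):
        for item in items:            # items currently held and rehearsed
            if rng.random() < p_transfer:
                long_term.add(item)   # copied into long-term memory
    return long_term

print(rehearse(["cat", "7", "blue"]))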

John Anderson in his Adaptive Control of Thought-Rational (ACT-R) model uses the functions of computational models and the findings of cognitive science. The ACT-R model is based on the theory that the brain consists of several modules which perform specialized functions separately from one another. The ACT-R model is classified as a symbolic approach to cognitive science.

Connectionist networks

Another approach which deals more with the semantic content of cognitive science is connectionism or neural network modeling. Connectionism relies on the idea that the brain consists of simple units or nodes and the behavioral response comes primarily from the layers of connections between the nodes and not from the environmental stimulus itself.

Connectionist networks differ from computational modeling specifically because of two functions: neural back-propagation and parallel processing. Neural back-propagation is a method utilized by connectionist networks to show evidence of learning. After a connectionist network produces a response, the simulated results are compared to real-life situational results. The feedback provided by the backward propagation of errors is used to improve the accuracy of the network's subsequent responses. The second function, parallel processing, stemmed from the belief that knowledge and perception are not limited to specific modules but rather are distributed throughout the cognitive networks. The presence of parallel distributed processing has been shown in psychological demonstrations like the Stroop effect, where the brain seems to be analyzing the perception of color and the meaning of language at the same time. However, this theoretical approach has been continually disproved because the two cognitive functions for color perception and word forming operate separately and simultaneously, not in parallel with each other.
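A minimal sketch of these two functions, using a tiny network trained on the XOR problem purely for illustration (the task, architecture, and parameters are not drawn from the text): each pass processes all input patterns in parallel, compares the simulated outputs with the targets, and back-propagates the error to adjust the connection weights.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)            # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)            # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)            # forward pass, all patterns in parallel
    out = sigmoid(h @ W2 + b2)
    err = out - y                       # compare simulated and target outputs
    grad_out = err * out * (1 - out)    # backward propagation of errors
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out;  b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;    b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(2))   # with enough iterations this typically approaches [[0], [1], [1], [0]]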

The field of cognition may have benefited from the use of connectionist networks, but setting up neural network models can be quite a tedious task, and the results may be less interpretable than the system they are trying to model. Therefore, the results may be used as evidence for a broad theory of cognition without explaining the particular process happening within the cognitive function. Other disadvantages of connectionism lie in the research methods it employs and the hypotheses it tests, which have often proven inaccurate or ineffective, taking connectionist models away from an accurate representation of how the brain functions. These issues make neural network models ineffective for studying higher forms of information processing, and hinder connectionism from advancing the general understanding of human cognition.

Cognitive model

From Wikipedia, the free encyclopedia

A cognitive model is an approximation to animal cognitive processes (predominantly human) for the purposes of comprehension and prediction. There are many types of cognitive models, and they can range from box-and-arrow diagrams to a set of equations to software programs that interact with the same tools that humans use to complete tasks (e.g., computer mouse and keyboard).

Relationship to cognitive architectures

Cognitive models can be developed with or without a cognitive architecture, though the two are not always easily distinguishable. In contrast to cognitive architectures, cognitive models tend to be focused on a single cognitive phenomenon or process (e.g., list learning), how two or more processes interact (e.g., visual search and decision making), or making behavioral predictions for a specific task or tool (e.g., how instituting a new software package will affect productivity). Cognitive architectures tend to be focused on the structural properties of the modeled system, and help constrain the development of cognitive models within the architecture. Likewise, model development helps to inform limitations and shortcomings of the architecture. Some of the most popular architectures for cognitive modeling include ACT-R, Clarion, LIDA, and Soar.

History

Cognitive modeling historically developed within cognitive psychology/cognitive science (including human factors), and has received contributions from the fields of machine learning and artificial intelligence among others.

Box-and-arrow models

A number of key terms are used to describe the processes involved in the perception, storage, and production of speech. Typically, they are used by speech pathologists while treating a child patient. The input signal is the speech signal heard by the child, usually assumed to come from an adult speaker. The output signal is the utterance produced by the child. The unseen psychological events that occur between the arrival of an input signal and the production of speech are the focus of psycholinguistic models. Events that process the input signal are referred to as input processes, whereas events that process the production of speech are referred to as output processes. Some aspects of speech processing are thought to happen online—that is, they occur during the actual perception or production of speech and thus require a share of the attentional resources dedicated to the speech task. Other processes, thought to happen offline, take place as part of the child's background mental processing rather than during the time dedicated to the speech task. In this sense, online processing is sometimes defined as occurring in real-time, whereas offline processing is said to be time-free (Hewlett, 1990).

In box-and-arrow psycholinguistic models, each hypothesized level of representation or processing can be represented in a diagram by a “box,” and the relationships between them by “arrows,” hence the name. Sometimes (as in the models of Smith, 1973, and Menn, 1978, described later in this paper) the arrows represent processes additional to those shown in boxes. Such models make explicit the hypothesized information-processing activities carried out in a particular cognitive function (such as language), in a manner analogous to computer flowcharts that depict the processes and decisions carried out by a computer program.

Box-and-arrow models differ widely in the number of unseen psychological processes they describe and thus in the number of boxes they contain. Some have only one or two boxes between the input and output signals (e.g., Menn, 1978; Smith, 1973), whereas others have multiple boxes representing complex relationships between a number of different information-processing events (e.g., Hewlett, 1990; Hewlett, Gibbon, & Cohen-McKenzie, 1998; Stackhouse & Wells, 1997). The most important box, however, and the source of much ongoing debate, is that representing the underlying representation (or UR). In essence, an underlying representation captures information stored in a child's mind about a word he or she knows and uses. As the following description of several models will illustrate, the nature of this information and thus the type(s) of representation present in the child's knowledge base have captured the attention of researchers for some time. (Elise Baker et al. Psycholinguistic Models of Speech Development and Their Application to Clinical Practice. Journal of Speech, Language, and Hearing Research. June 2001. 44. p 685–702.)

Computational models

A computational model is a mathematical model in computational science that requires extensive computational resources to study the behavior of a complex system by computer simulation. The system under study is often a complex nonlinear system for which simple, intuitive analytical solutions are not readily available. Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by changing the parameters of the system in the computer, and studying the differences in the outcome of the experiments. Theories of operation of the model can be derived/deduced from these computational experiments. Examples of common computational models are weather forecasting models, earth simulator models, flight simulator models, molecular protein folding models, and neural network models.
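As a toy illustration of this style of experimentation, and not one of the model classes listed below, the logistic map is a one-parameter nonlinear system whose long-run behaviour is explored by re-running the simulation with different parameter values:

def simulate_logistic(r, x0=0.5, steps=200, discard=150):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t) and return the
    long-run values reached after an initial transient is discarded."""
    x = x0
    trajectory = []
    for t in range(steps):
        x = r * x * (1 - x)
        if t >= discard:
            trajectory.append(round(x, 4))
    return trajectory

# Computational experiment: vary the control parameter r and compare outcomes
# (a fixed point, a periodic cycle, and apparently chaotic behaviour).
for r in (2.8, 3.2, 3.9):
    print(r, simulate_logistic(r)[:5])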

Symbolic

A symbolic model is expressed in characters, usually non-numeric ones, that require translation before they can be used.

Subsymbolic

A model is subsymbolic if it is made of constituent entities that are not representations in their turn, e.g., pixels, sound images as perceived by the ear, or signal samples; subsymbolic units in neural networks can be considered particular cases of this category.

Hybrid

Hybrid computers are computers that exhibit features of analog computers and digital computers. The digital component normally serves as the controller and provides logical operations, while the analog component normally serves as a solver of differential equations. See more details at hybrid intelligent system.

Dynamical systems

In the traditional computational approach, representations are viewed as static structures of discrete symbols. Cognition takes place by transforming static symbol structures in discrete, sequential steps. Sensory information is transformed into symbolic inputs, which produce symbolic outputs that get transformed into motor outputs. The entire system operates in an ongoing cycle.

What is missing from this traditional view is that human cognition happens continuously and in real time. Breaking down the processes into discrete time steps may not fully capture this behavior. An alternative approach is to define a system with (1) a state of the system at any given time, (2) a behavior, defined as the change over time in overall state, and (3) a state set or state space, representing the totality of overall states the system could be in. The system is distinguished by the fact that a change in any aspect of the system state depends on other aspects of the same or other system states.

A typical dynamical model is formalized by several differential equations that describe how the system's state changes over time. In this approach, explanatory force is carried by the form of the space of possible trajectories and by the internal and external forces that shape the specific trajectory that unfolds over time, rather than by the physical nature of the underlying mechanisms that manifest these dynamics. On this dynamical view, parametric inputs alter the system's intrinsic dynamics, rather than specifying an internal state that describes some external state of affairs.
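A minimal sketch of such a formalization, assuming an invented two-dimensional state and an arbitrary parametric input, integrates the differential equations forward in time and records the resulting trajectory:

import numpy as np

def simulate(steps=1000, dt=0.01, k=4.0, damping=0.5, input_drive=1.0):
    """Euler-integrate a two-dimensional state (position x, velocity v) whose
    rate of change depends on the current state and a parametric input:
        dx/dt = v
        dv/dt = -k * x - damping * v + input_drive
    The particular equations are illustrative, not drawn from the text."""
    x, v = 0.0, 0.0
    trajectory = []
    for _ in range(steps):
        dx = v
        dv = -k * x - damping * v + input_drive
        x, v = x + dt * dx, v + dt * dv
        trajectory.append((x, v))
    return np.array(trajectory)

states = simulate()
print(states[-1])   # the state the trajectory settles toward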

Early dynamical systems

Associative memory

Early work in the application of dynamical systems to cognition can be found in the model of Hopfield networks. These networks were proposed as a model for associative memory. They represent the neural level of memory, modeling systems of around 30 neurons which can be in either an on or off state. By letting the network learn on its own, structure and computational properties naturally arise. Unlike previous models, “memories” can be formed and recalled by inputting a small portion of the entire memory. Time ordering of memories can also be encoded. The behavior of the system is modeled with vectors which can change values, representing different states of the system. This early model was a major step toward a dynamical systems view of human cognition, though many details had yet to be added and more phenomena accounted for.
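The storage-and-recall idea can be sketched in a few lines: patterns of +1/-1 unit states are stored Hebbian-fashion in a weight matrix, and presenting a corrupted cue lets the network settle back into the nearest stored memory. The patterns below are invented and the network is smaller than those Hopfield analyzed; this is an illustration of the mechanism, not a reproduction of his model.

import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: the weight matrix is the averaged sum of outer
    products of the stored +/-1 patterns, with self-connections zeroed."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, cue, steps=10):
    """Repeatedly update all units; the state settles toward a stored pattern."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, 1, -1, 1, -1, -1, -1])   # corrupted version of pattern 0
print(recall(W, noisy))                            # recovers the first pattern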

Language acquisition

By taking into account the evolutionary development of the human nervous system and the similarity of the brain to other organs, Elman proposed that language and cognition should be treated as a dynamical system rather than a digital symbol processor. Neural networks of the type Elman implemented have come to be known as Elman networks. Instead of treating language as a collection of static lexical items and grammar rules that are learned and then used according to fixed rules, the dynamical systems view defines the lexicon as regions of state space within a dynamical system. Grammar is made up of attractors and repellers that constrain movement in the state space. This means that representations are sensitive to context, with mental representations viewed as trajectories through mental space instead of objects that are constructed and remain static. Elman networks were trained with simple sentences to represent grammar as a dynamical system. Once a basic grammar had been learned, the networks could then parse complex sentences by predicting which words would appear next according to the dynamical model.
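The core mechanism can be sketched as follows: a context layer copies back the previous hidden state, so the network's internal state after each word is a point on a trajectory through state space that depends on the whole sequence so far. The vocabulary, word vectors, and weights below are invented, and no training is shown:

import numpy as np

rng = np.random.default_rng(1)
vocab = ["boy", "sees", "dog", "."]
embed = {w: rng.normal(size=5) for w in vocab}        # toy word vectors (invented)

hidden_size = 8
W_in = rng.normal(size=(5, hidden_size))              # input -> hidden weights
W_ctx = rng.normal(size=(hidden_size, hidden_size))   # context -> hidden weights

def run_sentence(words):
    """Forward pass of an Elman-style network (untrained): each step mixes the
    current word with the copied-back context, so the hidden state traces a
    trajectory that depends on the whole word sequence."""
    context = np.zeros(hidden_size)
    states = []
    for w in words:
        context = np.tanh(embed[w] @ W_in + context @ W_ctx)
        states.append(context.copy())
    return states

states = run_sentence(["boy", "sees", "dog", "."])
print(states[-1].round(2))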

Cognitive development

A classic developmental error has been investigated in the context of dynamical systems: The A-not-B error is proposed to be not a distinct error occurring at a specific age (8 to 10 months), but a feature of a dynamic learning process that is also present in older children. Children 2 years old were found to make an error similar to the A-not-B error when searching for toys hidden in a sandbox. After observing the toy being hidden in location A and repeatedly searching for it there, the 2-year-olds were shown a toy hidden in a new location B. When they looked for the toy, they searched in locations that were biased toward location A. This suggests that there is an ongoing representation of the toy's location that changes over time. The child's past behavior influences its model of locations of the sandbox, and so an account of behavior and learning must take into account how the system of the sandbox and the child's past actions is changing over time.

Locomotion

One proposed mechanism of a dynamical system comes from analysis of continuous-time recurrent neural networks (CTRNNs). By focusing on the output of the neural networks rather than their states and examining fully interconnected networks, a three-neuron central pattern generator (CPG) can be used to represent systems such as leg movements during walking. This CPG contains three motor neurons to control the foot, backward swing, and forward swing effectors of the leg. Outputs of the network represent whether the foot is up or down and how much force is being applied to generate torque in the leg joint. One feature of this pattern is that neuron outputs are either off or on most of the time. Another feature is that the states are quasi-stable, meaning that they will eventually transition to other states. A simple pattern generator circuit like this is proposed to be a building block for a dynamical system. Sets of neurons that simultaneously transition from one quasi-stable state to another are defined as a dynamic module. These modules can in theory be combined to create larger circuits that comprise a complete dynamical system. However, the details of how this combination could occur are not fully worked out.
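A sketch of the standard CTRNN update equation for a small fully interconnected circuit is given below; the weights, time constants, and biases are invented placeholders rather than an evolved walking controller, so it illustrates the formalism rather than reproducing the CPG described above:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_ctrnn(W, tau, theta, steps=2000, dt=0.01):
    """Euler-integrate a fully interconnected continuous-time recurrent neural
    network:  tau_i * dy_i/dt = -y_i + sum_j W[i, j] * sigmoid(y_j + theta_j).
    The network outputs are sigmoid(y + theta)."""
    y = np.zeros(len(tau))
    outputs = []
    for _ in range(steps):
        o = sigmoid(y + theta)
        y = y + dt * (-y + W @ o) / tau
        outputs.append(o.copy())
    return np.array(outputs)

# Three neurons standing in for foot, backward-swing, and forward-swing units;
# these parameter values are placeholders, not taken from any published CPG.
W = np.array([[4.0, -8.0, 2.0],
              [2.0, 4.0, -8.0],
              [-8.0, 2.0, 4.0]])
out = simulate_ctrnn(W, tau=np.array([1.0, 1.0, 1.0]),
                     theta=np.array([-2.0, -2.0, -2.0]))
print(out[-5:].round(2))   # inspect the most recent output values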

Modern dynamical systems

Behavioral dynamics

Modern formalizations of dynamical systems applied to the study of cognition vary. One such formalization, referred to as “behavioral dynamics”, treats the agent and the environment as a pair of coupled dynamical systems based on classical dynamical systems theory. In this formalization, the information from the environment informs the agent's behavior and the agent's actions modify the environment. In the specific case of perception-action cycles, the coupling of the environment and the agent is formalized by two functions. The first transforms the representation of the agent's actions into specific patterns of muscle activation that in turn produce forces in the environment. The second function transforms the information from the environment (i.e., patterns of stimulation at the agent's receptors that reflect the environment's current state) into a representation that is useful for controlling the agent's actions. Other similar dynamical systems have been proposed (although not developed into a formal framework) in which the agent's nervous system, the agent's body, and the environment are coupled together.
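A schematic sketch of the two coupling functions, using an invented target-pursuit example rather than anything from the formal framework: one function turns the agent's action into a force applied in the environment, the other turns the environment's state into information the agent can use to control its next action.

def action_to_force(action, gain=0.5):
    """First coupling function: map the agent's action variable onto a force
    exerted in the environment (a toy linear mapping)."""
    return gain * action

def environment_to_information(env_state, agent_position):
    """Second coupling function: map the environment's current state onto
    information usable for controlling action (here, a signed distance)."""
    return env_state - agent_position

# Coupled update loop: the environment informs the agent, and the agent's
# action changes the situation; here the agent simply pursues a drifting target.
target, position = 10.0, 0.0
for step in range(50):
    info = environment_to_information(target, position)   # perception
    action = 0.8 * info                                    # agent dynamics
    position += action_to_force(action)                    # action on the world
    target += 0.1                                          # the world's own dynamics

print(round(position, 2), round(target, 2))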

Adaptive behaviors

Behavioral dynamics have been applied to locomotive behavior. Modeling locomotion with behavioral dynamics demonstrates that adaptive behaviors could arise from the interactions of an agent and the environment. According to this framework, adaptive behaviors can be captured by two levels of analysis. At the first level of perception and action, an agent and an environment can be conceptualized as a pair of dynamical systems coupled together by the forces the agent applies to the environment and by the structured information provided by the environment. Thus, behavioral dynamics emerge from the agent-environment interaction. At the second level of time evolution, behavior can be expressed as a dynamical system represented as a vector field. In this vector field, attractors reflect stable behavioral solutions, whereas bifurcations reflect changes in behavior. In contrast to previous work on central pattern generators, this framework suggests that stable behavioral patterns are an emergent, self-organizing property of the agent-environment system rather than determined by the structure of either the agent or the environment.

Open dynamical systems

In an extension of classical dynamical systems theory, rather than coupling the environment's and the agent's dynamical systems to each other, an “open dynamical system” defines a “total system”, an “agent system”, and a mechanism to relate these two systems. The total system is a dynamical system that models an agent in an environment, whereas the agent system is a dynamical system that models an agent's intrinsic dynamics (i.e., the agent's dynamics in the absence of an environment). Importantly, the relation mechanism does not couple the two systems together, but rather continuously modifies the total system into the decoupled agent's total system. By distinguishing between total and agent systems, it is possible to investigate an agent's behavior when it is isolated from the environment and when it is embedded within an environment. This formalization can be seen as a generalization of the classical formalization, whereby the classical agent corresponds to the agent system of an open dynamical system, and the agent coupled to its environment corresponds to the total system.

Embodied cognition

In the context of dynamical systems and embodied cognition, representations can be conceptualized as indicators or mediators. In the indicator view, internal states carry information about the existence of an object in the environment, where the state of a system during exposure to an object is the representation of that object. In the mediator view, internal states carry information about the environment which is used by the system in obtaining its goals. In this more complex account, the states of the system carry information that mediates between the information the agent takes in from the environment and the force exerted on the environment by the agent's behavior. The application of open dynamical systems has been discussed for four types of classical embodied cognition examples:

  1. Instances where the environment and agent must work together to achieve a goal, referred to as "intimacy". A classic example of intimacy is the behavior of simple agents working to achieve a goal (e.g., insects traversing the environment). The successful completion of the goal relies fully on the coupling of the agent to the environment.
  2. Instances where the use of external artifacts improves the performance of tasks relative to performance without these artifacts. The process is referred to as "offloading". A classic example of offloading is the behavior of Scrabble players; people are able to create more words when playing Scrabble if they have the tiles in front of them and are allowed to physically manipulate their arrangement. In this example, the Scrabble tiles allow the agent to offload working memory demands on to the tiles themselves.
  3. Instances where a functionally equivalent external artifact replaces functions that are normally performed internally by the agent, which is a special case of offloading. One famous example is that of human (specifically the agents Otto and Inga) navigation in a complex environment with or without assistance of an artifact.
  4. Instances where there is not a single agent. The individual agent is part of a larger system that contains multiple agents and multiple artifacts. One famous example, formulated by Ed Hutchins in his book Cognition in the Wild, is that of navigating a naval ship.

The interpretations of these examples rely on the following logic: (1) the total system captures embodiment; (2) one or more agent systems capture the intrinsic dynamics of individual agents; (3) the complete behavior of an agent can be understood as a change to the agent's intrinsic dynamics in relation to its situation in the environment; and (4) the paths of an open dynamical system can be interpreted as representational processes. These embodied cognition examples show the importance of studying the emergent dynamics of agent-environment systems, as well as the intrinsic dynamics of agent systems. Rather than being at odds with traditional cognitive science approaches, dynamical systems are a natural extension of these methods and should be studied in parallel rather than in competition.

Multiple drafts model

From Wikipedia, the free encyclopedia

Daniel Dennett's multiple drafts model of consciousness is a physicalist theory of consciousness based upon cognitivism, which views the mind in terms of information processing. The theory is described in depth in his book, Consciousness Explained, published in 1991. As the title states, the book proposes a high-level explanation of consciousness which is consistent with support for the possibility of strong AI.

Dennett describes the theory as first-person operationalism. As he states it:

The Multiple Drafts model makes [the procedure of] "writing it down" in memory criterial for consciousness: that is what it is for the "given" to be "taken" ... There is no reality of conscious experience independent of the effects of various vehicles of content on subsequent action (and hence, of course, on memory).

The thesis of multiple drafts

Dennett's thesis is that our modern understanding of consciousness is unduly influenced by the ideas of René Descartes. To show why, he starts with a description of the phi illusion. In this experiment, two different coloured lights, with an angular separation of a few degrees at the eye, are flashed in succession. If the interval between the flashes is less than a second or so, the first light that is flashed appears to move across to the position of the second light. Furthermore, the light seems to change colour as it moves across the visual field. A green light will appear to turn red as it seems to move across to the position of a red light. Dennett asks how we could see the light change colour before the second light is observed.

Dennett claims that conventional explanations of the colour change boil down to either Orwellian or Stalinesque hypotheses, which he says are the result of Descartes' continued influence on our vision of the mind. In an Orwellian hypothesis, the subject comes to one conclusion, then goes back and changes that memory in light of subsequent events. This is akin to George Orwell's Nineteen Eighty-Four, where records of the past are routinely altered. In a Stalinesque hypothesis, the two events would be reconciled prior to entering the subject's consciousness, with the final result presented as fully resolved. This is akin to Joseph Stalin's show trials, where the verdict has been decided in advance and the trial is just a rote presentation.

[W]e can suppose, both theorists have exactly the same theory of what happens in your brain; they agree about just where and when in the brain the mistaken content enters the causal pathways; they just disagree about whether that location is to be deemed pre-experiential or post-experiential. ... [T]hey even agree about how it ought to "feel" to subjects: Subjects should be unable to tell the difference between misbegotten experiences and immediately misremembered experiences. [p. 125, original emphasis.]

Dennett argues that there is no principled basis for picking one of these theories over the other, because they share a common error in supposing that there is a special time and place where unconscious processing becomes consciously experienced, entering into what Dennett calls the "Cartesian theatre". Both theories require us to cleanly divide a sequence of perceptions and reactions into before and after the instant that they reach the seat of consciousness, but he denies that there is any such moment, as it would lead to infinite regress. Instead, he asserts that there is no privileged place in the brain where consciousness happens. Dennett states that, "[t]here does not exist ... a process such as 'recruitment of consciousness' (into what?), nor any place where the 'vehicle's arrival' is recognized (by whom?)"

Cartesian materialism is the view that there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of "presentation" in experience because what happens there is what you are conscious of. ... Many theorists would insist that they have explicitly rejected such an obviously bad idea. But ... the persuasive imagery of the Cartesian Theater keeps coming back to haunt us—laypeople and scientists alike—even after its ghostly dualism has been denounced and exorcized. [p. 107, original emphasis.]

With no theatre, there is no screen, hence no reason to re-present data after it has already been analysed. Dennett says that, "the Multiple Drafts model goes on to claim that the brain does not bother 'constructing' any representations that go to the trouble of 'filling in' the blanks. That would be a waste of time and (shall we say?) paint. The judgement is already in so we can get on with other tasks!"

According to the model, there are a variety of sensory inputs from a given event and also a variety of interpretations of these inputs. The sensory inputs arrive in the brain and are interpreted at different times, so a given event can give rise to a succession of discriminations, constituting the equivalent of multiple drafts of a story. As soon as each discrimination is accomplished, it becomes available for eliciting a behaviour; it does not have to wait to be presented at the theatre.

Like a number of other theories, the Multiple Drafts model understands conscious experience as taking time to occur, such that percepts do not instantaneously arise in the mind in their full richness. The distinction is that Dennett's theory denies any clear and unambiguous boundary separating conscious experiences from all other processing. According to Dennett, consciousness is to be found in the actions and flows of information from place to place, rather than some singular view containing our experience. There is no central experiencer who confers a durable stamp of approval on any particular draft.

Different parts of the neural processing assert more or less control at different times. For something to reach consciousness is akin to becoming famous, in that it must leave behind consequences by which it is remembered. To put it another way, consciousness is the property of having enough influence to affect what the mouth will say and the hands will do. Which inputs are "edited" into our drafts is not an exogenous act of supervision, but part of the self-organizing functioning of the network, and at the same level as the circuitry that conveys information bottom-up.

The conscious self is taken to exist as an abstraction visible at the level of the intentional stance, akin to a body of mass having a "centre of gravity". Analogously, Dennett refers to the self as the "centre of narrative gravity", a story we tell ourselves about our experiences. Consciousness exists, but not independently of behaviour and behavioural disposition, which can be studied through heterophenomenology.

The origin of this operationalist approach can be found in Dennett's immediately preceding work. Dennett (1988) explains consciousness in terms of access consciousness alone, denying the independent existence of what Ned Block has labeled phenomenal consciousness. He argues that "Everything real has properties, and since I don't deny the reality of conscious experience, I grant that conscious experience has properties". Having related all consciousness to properties, he concludes that they cannot be meaningfully distinguished from our judgements about them. He writes:

The infallibilist line on qualia treats them as properties of one's experience one cannot in principle misdiscover, and this is a mysterious doctrine (at least as mysterious as papal infallibility) unless we shift the emphasis a little and treat qualia as logical constructs out of subjects' qualia-judgments: a subject's experience has the quale F if and only if the subject judges his experience to have quale F. We can then treat such judgings as constitutive acts, in effect, bringing the quale into existence by the same sort of license as novelists have to determine the hair color of their characters by fiat. We do not ask how Dostoevski knows that Raskolnikov's hair is light brown.

In other words, once we've explained a perception fully in terms of how it affects us, there is nothing left to explain. In particular, there is no such thing as a perception which may be considered in and of itself (a quale). Instead, the subject's honest reports of how things seem to them are inherently authoritative on how things seem to them, but not on the matter of how things actually are.

So when we look one last time at our original characterization of qualia, as ineffable, intrinsic, private, directly apprehensible properties of experience, we find that there is nothing to fill the bill. In their place are relatively or practically ineffable public properties we can refer to indirectly via reference to our private property-detectors—private only in the sense of idiosyncratic. And insofar as we wish to cling to our subjective authority about the occurrence within us of states of certain types or with certain properties, we can have some authority—not infallibility or incorrigibility, but something better than sheer guessing—but only if we restrict ourselves to relational, extrinsic properties like the power of certain internal states of ours to provoke acts of apparent re-identification. So contrary to what seems obvious at first blush, there simply are no qualia at all.

The key to the multiple drafts model is that, after removing qualia, explaining consciousness boils down to explaining the behaviour we recognise as conscious. Consciousness is as consciousness does.

Critical responses

Some of the criticism of Dennett's theory is due to the perceived tone of his presentation. As one grudging supporter admits, "there is much in this book that is disputable. And Dennett is at times aggravatingly smug and confident about the merits of his arguments ... All in all Dennett's book is annoying, frustrating, insightful, provocative and above all annoying" (Korb 1993).

Bogen (1992) points out that the brain is bilaterally symmetrical. That being the case, if Cartesian materialism is true, there might be two Cartesian theatres, so arguments against only one are flawed. Velmans (1992) argues that the phi effect and the cutaneous rabbit illusion demonstrate that there is a delay whilst modelling occurs and that this delay was discovered by Libet.

It has also been claimed that the argument in the multiple drafts model does not support its conclusion.

"Straw man"

Much of the criticism asserts that Dennett's theory attacks the wrong target, failing to explain what it claims to. Chalmers (1996) maintains that Dennett has produced no more than a theory of how subjects report events. Some even parody the title of the book as "Consciousness Explained Away", accusing him of greedy reductionism. Another line of criticism disputes the accuracy of Dennett's characterisations of existing theories:

The now standard response to Dennett's project is that he has picked a fight with a straw man. Cartesian materialism, it is alleged, is an impossibly naive account of phenomenal consciousness held by no one currently working in cognitive science or the philosophy of mind. Consequently, whatever the effectiveness of Dennett's demolition job, it is fundamentally misdirected (see, e.g., Block, 1993, 1995; Shoemaker, 1993; and Tye, 1993).

Unoriginality

Multiple drafts is also attacked for making a claim to novelty. It may be the case, however, that such attacks mistake which features Dennett is claiming as novel. Korb states that, "I believe that the central thesis will be relatively uncontentious for most cognitive scientists, but that its use as a cleaning solvent for messy puzzles will be viewed less happily in most quarters." (Korb 1993) In this way, Dennett uses uncontroversial ideas towards more controversial ends, leaving him open to claims of unoriginality when uncontroversial parts are focused upon.

Even the notion of consciousness as drafts is not unique to Dennett. According to Hankins, Dieter Teichert suggests that Paul Ricoeur's theories agree with Dennett's on the notion that "the self is basically a narrative entity, and that any attempt to give it a free-floating independent status is misguided." [Hankins] Others see Derrida's (1982) representationalism as consistent with the notion of a mind that has perceptually changing content without a definitive present instant.

To those who believe that consciousness entails something more than behaving in all ways conscious, Dennett's view is seen as eliminativist, since it denies the existence of qualia and the possibility of philosophical zombies. However, Dennett is not denying the existence of the mind or of consciousness, only what he considers a naive view of them. The point of contention is whether Dennett's own definitions are indeed more accurate: whether what we think of when we speak of perceptions and consciousness can be understood in terms of nothing more than their effect on behaviour.

Information processing and consciousness

The role of information processing in consciousness has been criticised by John Searle who, in his Chinese room argument, states that he cannot find anything that could be recognised as conscious experience in a system that relies solely on motions of things from place to place. Dennett sees this argument as misleading, arguing that consciousness is not to be found in a specific part of the system, but in the actions of the whole. In essence, he denies that consciousness requires something in addition to capacity for behaviour, saying that philosophers such as Searle, "just can't imagine how understanding could be a property that emerges from lots of distributed quasi-understanding in a large system" (p. 439).

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...