Wednesday, May 12, 2021

Philosophy of artificial intelligence

The philosophy of artificial intelligence is a branch of the philosophy of technology that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the field is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures; see artificial life), so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence. Some scholars argue that the AI community's dismissal of philosophy is detrimental.

The philosophy of artificial intelligence attempts to answer questions such as the following:

  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
  • Can a machine have a mind, mental states, and consciousness in the same sense that a human being can? Can it feel how things are?

Questions like these reflect the divergent interests of AI researchers, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion.

Important propositions in the philosophy of AI include some of the following:

  • Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.
  • The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."
  • Allen Newell and Herbert A. Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."
  • John Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."
  • Hobbes' mechanism: "For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts..."

Can a machine display general intelligence?

Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This question defines the scope of what machines could do in the future and guides the direction of AI research. It only concerns the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers; to answer this question, it does not matter whether a machine is really thinking (as a person thinks) or is just acting like it is thinking.

The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the Dartmouth workshop of 1956:

  • "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."

Arguments against the basic premise must show that building a working AI system is impossible because there is some practical limit to the abilities of computers or that there is some special quality of the human mind that is necessary for intelligent behavior and yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible.

It is also possible to sidestep the connection between the two parts of the above proposal. For instance, machine learning, beginning with Turing's famous child machine proposal, essentially achieves the desired feature of intelligence without a precise design-time description of how it would work. The account of robot tacit knowledge eliminates the need for a precise description altogether.

The first step to answering the question is to clearly define "intelligence".

Intelligence

The "standard interpretation" of the Turing test.

Turing test

Alan Turing reduced the problem of defining intelligence to a simple question about conversation. He suggested that if a machine can answer any question put to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human. Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes that "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks". Turing's test extends this polite convention to machines:

  • If a machine acts as intelligently as a human being, then it is as intelligent as a human being.

One criticism of the Turing test is that it only measures the "humanness" of the machine's behavior, rather than the "intelligence" of the behavior. Since human behavior and intelligent behavior are not exactly the same thing, the test fails to measure intelligence. Stuart J. Russell and Peter Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons'".

Intelligent agent definition

Simple reflex agent

Twenty-first century AI research defines intelligence in terms of intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent. 

  • "If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge then it is intelligent."

Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for unintelligent human traits such as making typing mistakes or the ability to be insulted.  They have the disadvantage that they can fail to differentiate between "things that think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence. 
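The agent definition above can be made concrete with a small sketch. This is a minimal illustration, not any standard library's API: the thermostat world, its temperatures, and the performance measure here are all invented for illustration, echoing the text's point that by this definition even a thermostat has a rudimentary intelligence.

```python
class ThermostatAgent:
    """A simple reflex agent: it maps each percept directly to an action."""

    def __init__(self, target=20.0, band=1.0):
        self.target = target  # desired temperature
        self.band = band      # tolerance before acting

    def act(self, temperature):
        """Percept -> action, with no internal model or memory."""
        if temperature < self.target - self.band:
            return "heat"
        if temperature > self.target + self.band:
            return "cool"
        return "off"

def performance(agent, temperatures):
    """Performance measure: fraction of time steps spent near the target."""
    near = sum(abs(t - agent.target) <= agent.band for t in temperatures)
    return near / len(temperatures)

agent = ThermostatAgent()
print(agent.act(15.0))   # "heat"
print(agent.act(25.0))   # "cool"
print(performance(agent, [19.5, 20.0, 25.0, 20.5]))  # 0.75
```

Nothing here distinguishes "things that think" from "things that do not": the agent scores well on its performance measure while doing nothing a philosopher would call thinking, which is exactly the disadvantage the text describes.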

Arguments that a machine can display general intelligence

The brain can be simulated

An MRI scan of a normal adult human brain

Hubert Dreyfus describes this argument as claiming that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then .... we ... ought to be able to reproduce the behavior of the nervous system with some physical device". This argument, first introduced as early as 1943 and vividly described by Hans Moravec in 1988, is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029. A non-real-time simulation of a thalamocortical model that has the size of the human brain (10¹¹ neurons) was performed in 2005, and it took 50 days to simulate 1 second of brain dynamics on a cluster of 27 processors.

Few disagree that a brain simulation is possible in theory, even critics of AI such as Hubert Dreyfus and John Searle. However, Searle points out that, in principle, anything can be simulated by a computer; thus, bringing the definition to its breaking point leads to the conclusion that any process at all can technically be considered "computation". "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes. Thus, merely mimicking the functioning of a brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind.

Human thinking is symbol processing

In 1963, Allen Newell and Herbert A. Simon proposed that "symbol manipulation" was the essence of both human and machine intelligence. They wrote:

  • "A physical symbol system has the necessary and sufficient means of general intelligent action."

This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence). Another version of this position was described by philosopher Hubert Dreyfus, who called it "the psychological assumption":

  • "The mind can be viewed as a device operating on bits of information according to formal rules."

The "symbols" that Newell, Simon and Dreyfus discussed were word-like and high level — symbols that directly correspond with objects in the world, such as <dog> and <tail>. Most AI programs written between 1956 and 1990 used this kind of symbol. Modern AI, based on statistics and mathematical optimization, does not use the high-level "symbol processing" that Newell and Simon discussed.

Arguments against symbol processing

These arguments show that human thinking does not consist (solely) of high level symbol manipulation. They do not show that artificial intelligence is impossible, only that more than symbol processing is required.

Gödelian anti-mechanist arguments

In 1931, Kurt Gödel proved with an incompleteness theorem that it is always possible to construct a "Gödel statement" that a given consistent formal system of logic (such as a high-level symbol manipulation program) could not prove. Despite being a true statement, the constructed Gödel statement is unprovable in the given system. (The truth of the constructed Gödel statement is contingent on the consistency of the given system; applying the same process to a subtly inconsistent system will appear to succeed, but will actually yield a false "Gödel statement" instead.) More speculatively, Gödel conjectured that the human mind can eventually correctly determine the truth or falsity of any well-grounded mathematical statement (including any possible Gödel statement), and that therefore the human mind's power is not reducible to a mechanism. Philosophers John Lucas (since 1961) and Roger Penrose (since 1989) have championed this philosophical anti-mechanist argument. Gödelian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent (completely free of error) and believes fully in its own consistency (and can make all logical inferences that follow from its own consistency, including belief in its Gödel statement). This is provably impossible for a Turing machine (and, by an informal extension, any known type of mechanical computer) to do; therefore, the Gödelian concludes that human reasoning is too powerful to be captured in a machine.

However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate. This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis."

More pragmatically, Russell and Norvig note that Gödel's argument only applies to what can theoretically be proved, given an infinite amount of memory and time. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems. It is not necessary to prove everything in order to be intelligent.

Less formally, Douglas Hofstadter, in his Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid, states that these "Gödel-statements" always refer to the system itself, drawing an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying". But, of course, the Epimenides paradox applies to anything that makes statements, whether they are machines or humans, even Lucas himself. Consider:

  • Lucas can't assert the truth of this statement.

This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless.

After concluding that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable processes involving the collapse of quantum mechanical states give humans a special advantage over existing computers. Existing quantum computers are only capable of reducing the complexity of Turing computable tasks and are still restricted to tasks within the scope of Turing machines. By Penrose and Lucas's arguments, existing quantum computers are not sufficient, so Penrose seeks some other process involving new physics, for instance quantum gravity, which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and also spanning more than one neuron. However, other scientists point out that there is no plausible organic mechanism in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of quantum decoherence seems too fast to influence neuron firing.

Dreyfus: the primacy of implicit skills

Hubert Dreyfus argued that human intelligence and expertise depended primarily on implicit skill rather than explicit symbolic manipulation, and argued that these skills would never be captured in formal rules.

Dreyfus's argument had been anticipated by Turing in his 1950 paper "Computing Machinery and Intelligence", where he had classified this as the "argument from the informality of behaviour." Turing argued in response that, just because we do not know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'"

Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern unconscious reasoning. The situated movement in robotics research attempts to capture our unconscious skills at perception and attention. Computational intelligence paradigms, such as neural nets, evolutionary algorithms and so on are mostly directed at simulated unconscious reasoning and learning. Statistical approaches to AI can make predictions which approach the accuracy of human intuitive guesses. Research into commonsense knowledge has focused on reproducing the "background" or context of knowledge. In fact, AI research in general has moved away from high level symbol manipulation, towards new models that are intended to capture more of our unconscious reasoning. Historian and AI researcher Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."

Can a machine have a mind, consciousness, and mental states?

This is a philosophical question, related to the problem of other minds and the hard problem of consciousness. The question revolves around a position defined by John Searle as "strong AI":

  • A physical symbol system can have a mind and mental states.

Searle distinguished this position from what he called "weak AI":

  • A physical symbol system can act intelligently.

Searle introduced the terms to isolate strong AI from weak AI so he could focus on what he thought was the more interesting and debatable issue. He argued that even if we assume that we had a computer program that acted exactly like a human mind, there would still be a difficult philosophical question that needed to be answered.

Neither of Searle's two positions are of great concern to AI research, since they do not directly answer the question "can a machine display general intelligence?" (unless it can also be shown that consciousness is necessary for intelligence). Turing wrote "I do not wish to give the impression that I think there is no mystery about consciousness… [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]." Russell and Norvig agree: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."

There are a few researchers who believe that consciousness is an essential element in intelligence, such as Igor Aleksander, Stan Franklin, Ron Sun, and Pentti Haikonen, although their definition of "consciousness" strays very close to "intelligence."

Before we can answer this question, we must be clear what we mean by "minds", "mental states" and "consciousness".

Consciousness, minds, mental states, meaning

The words "mind" and "consciousness" are used by different communities in different ways. Some new age thinkers, for example, use the word "consciousness" to describe something similar to Bergson's "élan vital": an invisible, energetic fluid that permeates life and especially the mind. Science fiction writers use the word to describe some essential property that makes us human: a machine or alien that is "conscious" will be presented as a fully human character, with intelligence, desires, will, insight, pride and so on. (Science fiction writers also use the words "sentience", "sapience," "self-awareness" or "ghost" - as in the Ghost in the Shell manga and anime series - to describe this essential human property). For others, the words "mind" or "consciousness" are used as a kind of secular synonym for the soul.

For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we know something, or mean something or understand something. "It's not hard to give a commonsense definition of consciousness" observes philosopher John Searle. What is mysterious and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity give rise to this (familiar) experience of perceiving, meaning or thinking?

Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem in the philosophy of mind called the "mind-body problem." A related problem is the problem of meaning or understanding (which philosophers call "intentionality"): what is the connection between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A third issue is the problem of experience (or "phenomenology"): If two people see the same thing, do they have the same experience? Or are there things "inside their head" (called "qualia") that can be different from person to person?

Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties, such as the mind, experience and understanding. Some of the harshest critics of artificial intelligence agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain. The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of the neurons to create minds, with mental states (like understanding or perceiving), and ultimately, the experience of consciousness?

Arguments that a computer cannot have a mind and mental states

Searle's Chinese room

John Searle asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing test and demonstrates general intelligent action. Suppose, specifically, that the program can converse in fluent Chinese. Write the program on 3×5 cards and give them to an ordinary person who does not speak Chinese. Lock the person into a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the mental state of understanding, or which has conscious awareness of what is being discussed in Chinese? The man is clearly not aware. The room cannot be aware. The cards certainly aren't aware. Searle concludes that the Chinese room, or any other physical symbol system, cannot have a mind.

Searle goes on to argue that actual mental states and consciousness require (yet to be described) "actual physical-chemical properties of actual human brains." He argues there are special "causal properties" of brains and neurons that give rise to minds: in his words, "brains cause minds."

Related arguments: Leibniz' mill, Davis's telephone exchange, Block's Chinese nation and Blockhead

Gottfried Leibniz made essentially the same argument as Searle in 1714, using the thought experiment of expanding the brain until it was the size of a mill. In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called "the Chinese Nation" or "the Chinese Gym". Ned Block also proposed his Blockhead argument, which is a version of the Chinese room in which the program has been re-factored into a simple set of rules of the form "see this, do that", removing all mystery from the program.

Responses to the Chinese room

Responses to the Chinese room emphasize several different points.

  • The systems reply and the virtual mind reply: This reply argues that the system, including the man, the program, the room, and the cards, is what understands Chinese. Searle claims that the man in the room is the only thing which could possibly "have a mind" or "understand", but others disagree, arguing that it is possible for there to be two minds in the same physical place, similar to the way a computer can simultaneously "be" two machines at once: one physical (like a Macintosh) and one "virtual" (like a word processor).
  • Speed, power and complexity replies: Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
  • Robot reply: To truly understand, some believe the Chinese Room needs eyes and hands. Hans Moravec writes: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."
  • Brain simulator reply: What if the program simulates the sequence of nerve firings at the synapses of an actual brain of an actual Chinese speaker? The man in the room would be simulating an actual brain. This is a variation on the "systems reply" that appears more plausible because "the system" now clearly operates like a human brain, which strengthens the intuition that there is something besides the man in the room that could understand Chinese.
  • Other minds reply and the epiphenomena reply: Several people have noted that Searle's argument is just a version of the problem of other minds, applied to machines. Since it is difficult to decide if people are "actually" thinking, we should not be surprised that it is difficult to answer the same question about machines.
A related question is whether "consciousness" (as Searle understands it) exists. Searle argues that the experience of consciousness can't be detected by examining the behavior of a machine, a human being or any other animal. Daniel Dennett points out that natural selection cannot preserve a feature of an animal that has no effect on the behavior of the animal, and thus consciousness (as Searle understands it) can't be produced by natural selection. Therefore, either natural selection did not produce consciousness, or "strong AI" is correct in that consciousness can be detected by a suitably designed Turing test.

Is thinking a kind of computation?

The computational theory of mind or "computationalism" claims that the relationship between mind and brain is similar (if not identical) to the relationship between a running program and a computer. The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules). The latest version is associated with philosophers Hilary Putnam and Jerry Fodor.

This question bears on our earlier questions: if the human brain is a kind of computer then computers can be both intelligent and conscious, answering both the practical and philosophical questions of AI. In terms of the practical question of AI ("Can a machine display general intelligence?"), some versions of computationalism make the claim that (as Hobbes wrote):

  • Reasoning is nothing but reckoning.

In other words, our intelligence derives from a form of calculation, similar to arithmetic. This is the physical symbol system hypothesis discussed above, and it implies that artificial intelligence is possible. In terms of the philosophical question of AI ("Can a machine have mind, mental states and consciousness?"), most versions of computationalism claim that (as Stevan Harnad characterizes it):

  • Mental states are just implementations of (the right) computer programs.

This is John Searle's "strong AI" discussed above, and it is the real target of the Chinese room argument (according to Harnad).

Other related questions

Can a machine have emotions?

If "emotions" are defined only in terms of their effect on behavior or on how they function inside an organism, then emotions can be viewed as a mechanism that an intelligent agent uses to maximize the utility of its actions: fear is a source of urgency, and empathy is a necessary component of good human-computer interaction. Given this definition of emotion, Hans Moravec believes that "robots in general will be quite emotional about being nice people". He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love." Daniel Crevier writes "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species."

Can a machine be self-aware?

"Self-awareness", as noted above, is sometimes used by science fiction writers as a name for the essential human property that makes a character fully human. Turing strips away all other properties of human beings and reduces the question to "can a machine be the subject of its own thought?" Can it think about itself? Viewed in this way, a program can be written that can report on its own internal states, such as a debugger. Self-awareness arguably presumes more capability than this, however: a machine that can ascribe meaning not only to its own state but, more generally, can pose questions without solid answers, such as the contextual nature of its existence now, how it compares to past states or plans for the future, the limits and value of its work product, and how its performance is valued by or compared to others.
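The weaker, debugger-like sense of "reporting on its own internal states" is easy to demonstrate. This is a minimal illustrative sketch (the class and its state variables are invented for the example), showing a program that inspects and describes its own state, without any claim that this constitutes self-awareness:

```python
class SelfReportingCounter:
    """A trivial program that can describe its own internal state."""

    def __init__(self):
        self.count = 0
        self.history = []

    def step(self):
        self.count += 1
        self.history.append(self.count)

    def report(self):
        """Return a description of this object's own current state."""
        return {
            "count": self.count,
            "steps_taken": len(self.history),
            "attributes": sorted(vars(self).keys()),  # introspect own fields
        }

c = SelfReportingCounter()
c.step()
c.step()
print(c.report())
# {'count': 2, 'steps_taken': 2, 'attributes': ['count', 'history']}
```

The report is exhaustive and accurate, yet it is exactly the kind of capability the text sets apart from self-awareness in the richer sense: the program cannot pose, let alone ponder, open questions about its own existence.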

Can a machine be original or creative?

Turing reduces this to the question of whether a machine can "take us by surprise" and argues that this is obviously true, as any programmer can attest. He notes that, with enough storage capacity, a computer can behave in an astronomical number of different ways. It must be possible, even trivial, for a computer that can represent ideas to combine them in new ways. (Douglas Lenat's Automated Mathematician, as one example, combined ideas to discover new mathematical truths.) Kaplan and Haenlein suggest that machines can display scientific creativity, while it seems likely that humans will have the upper hand where artistic creativity is concerned.

In 2009, scientists at Aberystwyth University in Wales and the U.K.'s University of Cambridge designed a robot called Adam that they believe to be the first machine to independently come up with new scientific findings. Also in 2009, researchers at Cornell developed Eureqa, a computer program that extrapolates formulas to fit input data, such as finding the laws of motion from a pendulum's motion.
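The general idea behind such formula-finding programs can be sketched in a few lines. This is not Eureqa's actual algorithm, only an illustrative toy: fit each candidate formula to the data by least squares and keep the one with the smallest error. The candidate formulas and the synthetic pendulum data below are invented for the example.

```python
import math

def fit_scale(xs, ys, basis):
    """Best coefficient a for the model y ~ a * basis(x), by least squares."""
    num = sum(basis(x) * y for x, y in zip(xs, ys))
    den = sum(basis(x) ** 2 for x in xs)
    a = num / den
    err = sum((y - a * basis(x)) ** 2 for x, y in zip(xs, ys))
    return a, err

# Synthetic data: pendulum period T = 2*pi*sqrt(L/g), with g = 9.8
g = 9.8
lengths = [0.25, 0.5, 1.0, 2.0]
periods = [2 * math.pi * math.sqrt(L / g) for L in lengths]

# Two candidate formulas; the program "discovers" which law fits the data.
candidates = {
    "T = a*L": lambda L: L,
    "T = a*sqrt(L)": math.sqrt,
}
best = min(candidates,
           key=lambda name: fit_scale(lengths, periods, candidates[name])[1])
print(best)  # "T = a*sqrt(L)"
```

A real system like Eureqa searches a vastly larger space of symbolic expressions, but the principle is the same: propose formulas, score them against the data, and keep the best.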

Can a machine be benevolent or hostile?

This question (like many others in the philosophy of artificial intelligence) can be presented in two forms. "Hostility" can be defined in terms of function or behavior, in which case "hostile" becomes synonymous with "dangerous". Or it can be defined in terms of intent: can a machine "deliberately" set out to do harm? The latter is the question "can a machine have conscious states?" (such as intentions) in another form.

The question of whether highly intelligent and completely autonomous machines would be dangerous has been examined in detail by futurists (such as the Machine Intelligence Research Institute). The obvious element of drama has also made the subject popular in science fiction, which has considered many different scenarios in which intelligent machines pose a threat to mankind; see Artificial intelligence in fiction.

One issue is that machines may acquire the autonomy and intelligence required to be dangerous very quickly. Vernor Vinge has suggested that over just a few years, computers will suddenly become thousands or millions of times more intelligent than humans. He calls this "the Singularity", and suggests that it may be somewhat, or possibly very, dangerous for humans. This is discussed by a philosophy called Singularitarianism.

In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and the degree to which they could use such abilities to pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science fiction is probably unlikely, but that there are other potential hazards and pitfalls.

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.

The President of the Association for the Advancement of Artificial Intelligence has commissioned a study of this issue, pointing to programs such as the Language Acquisition Device, which can emulate human interaction.

Some have suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.

Can a machine imitate all human characteristics?

Turing said "It is customary... to offer a grain of comfort, in the form of a statement that some peculiarly human characteristic could never be imitated by a machine. ... I cannot offer any such comfort, for I believe that no such bounds can be set."

Turing noted that there are many arguments of the form "a machine will never do X", where X can be many things, such as:

Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.

Turing argues that these objections are often based on naive assumptions about the versatility of machines or are "disguised forms of the argument from consciousness". Writing a program that exhibits one of these behaviors "will not make much of an impression." All of these arguments are tangential to the basic premise of AI, unless it can be shown that one of these traits is essential for general intelligence.

Can a machine have a soul?

Finally, those who believe in the existence of a soul may argue that "Thinking is a function of man's immortal soul." Alan Turing called this "the theological objection". He writes:

In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.

Views on the role of philosophy

Some scholars argue that the AI community's dismissal of philosophy is detrimental. In the Stanford Encyclopedia of Philosophy, some philosophers argue that the role of philosophy in AI is underappreciated. Physicist David Deutsch argues that without an understanding of philosophy or its concepts, AI development would suffer from a lack of progress.

Bionics

From Wikipedia, the free encyclopedia

Robot behaviour (bottom) modeled after that of a cockroach (top) and a gecko (middle).

Bionics or biologically inspired engineering is the application of biological methods and systems found in nature to the study and design of engineering systems and modern technology.

The word bionic, coined by Jack E. Steele in August 1958, is a portmanteau from biology and electronics that was popularized by the 1970s U.S. television series The Six Million Dollar Man and The Bionic Woman, both based upon the novel Cyborg by Martin Caidin. All three stories feature humans given various superhuman powers by their electromechanical implants.

According to proponents of bionic technology, the transfer of technology between lifeforms and manufactured objects is desirable because evolutionary pressure typically forces living organisms, fauna and flora alike, to become highly optimized and efficient. For example, dirt- and water-repellent paint coatings were developed from the observation that practically nothing sticks to the surface of the lotus plant (the lotus effect).

The term "biomimetic" is preferred for references to chemical reactions, such as reactions that, in nature, involve biological macromolecules (e.g., enzymes or nucleic acids) whose chemistry can be replicated in vitro using much smaller molecules.

Examples of bionics in engineering include the hulls of boats imitating the thick skin of dolphins, and sonar, radar, and medical ultrasound imaging imitating animal echolocation.

In the field of computer science, the study of bionics has produced artificial neurons, artificial neural networks, and swarm intelligence. Evolutionary computation was also motivated by bionics ideas but it took the idea further by simulating evolution in silico and producing well-optimized solutions that had never appeared in nature.
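The idea behind evolutionary computation, simulating selection, reproduction, and mutation in silico, can be illustrated with a minimal sketch. The bit-string genome, population size, mutation rate, and "OneMax" objective below are illustrative assumptions, not a reference implementation.

```python
import random

def evolve(fitness, genome_len=10, pop_size=30, generations=100, mutation_rate=0.1):
    """Minimal elitist evolutionary loop over bit-string genomes."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # rank by fitness
        parents = pop[: pop_size // 2]        # truncation selection (best half survive)
        children = []
        for p in parents:
            child = p[:]
            for i in range(genome_len):       # point mutation: flip bits at random
                if random.random() < mutation_rate:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children              # parents kept unchanged (elitism)
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1-bits ("OneMax").
best = evolve(fitness=sum)
```

Because the unmutated parents are carried over each generation, the best fitness never decreases; swapping in a different `fitness` function redirects the same loop toward a different optimization problem, with no change to the mechanism itself.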

Julian Vincent, professor of biomimetics at the University of Bath's Department of Mechanical Engineering, estimates that "at present there is only a 12% overlap between biology and technology in terms of the mechanisms used".

History

The name "biomimetics" was coined by Otto Schmitt in the 1950s. The term "bionics" was coined by Jack E. Steele in August 1958 while he was working at the Aeronautics Division at Wright-Patterson Air Force Base in Dayton, Ohio. However, terms like "biomimicry" and "biomimetics" are preferred in the technology world, to avoid confusion with the medical sense of "bionics." Coincidentally, Martin Caidin used the word for his 1972 novel Cyborg, which inspired the series The Six Million Dollar Man. Caidin was a long-time aviation industry writer before turning to fiction full-time.

Methods

Velcro was inspired by the tiny hooks found on the surface of burs.

The study of bionics often emphasizes implementing a function found in nature rather than imitating biological structures. For example, in computer science, cybernetics tries to model the feedback and control mechanisms that are inherent in intelligent behavior, while artificial intelligence tries to model the intelligent function regardless of the particular way it can be achieved.
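The feedback-and-control modelling attributed to cybernetics above can be sketched as a simple proportional controller, the most basic closed loop: measure the deviation from a goal and apply a correction proportional to it. The thermostat setting, gain, and heat-loss rate below are invented for illustration.

```python
def thermostat(setpoint, temp, gain=0.5, steps=50, leak=0.1):
    """Proportional feedback loop: heat in proportion to the error,
    while the room continuously leaks heat toward 0 degrees."""
    history = []
    for _ in range(steps):
        error = setpoint - temp          # measure deviation from the goal
        heating = gain * error           # corrective action proportional to error
        temp += heating - leak * temp    # apply control, subtract heat loss
        history.append(temp)
    return history

trace = thermostat(setpoint=20.0, temp=5.0)
```

With these numbers the loop settles just below 16.7 degrees rather than at the 20-degree setpoint: a pure proportional controller leaves a steady-state error whenever there is a constant disturbance (here, heat leakage), which is why practical controllers add integral terms.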

The conscious copying of examples and mechanisms from natural organisms and ecologies is a form of applied case-based reasoning, treating nature itself as a database of solutions that already work. Proponents argue that the selective pressure placed on all natural life forms weeds out failures.

Although almost all engineering could be said to be a form of biomimicry, the modern origins of this field are usually attributed to Buckminster Fuller, and its later codification as a distinct field of study to Janine Benyus.

There are generally three biological levels in fauna or flora after which technology can be modeled.

Examples

  • In robotics, bionics and biomimetics are used to apply the way animals move to the design of robots. BionicKangaroo was based on the movements and physiology of kangaroos.
  • Velcro is the most famous example of biomimetics. In 1948, the Swiss engineer George de Mestral was cleaning his dog of burrs picked up on a walk when he realized how the hooks of the burrs clung to the fur.
  • The horn-shaped, saw-tooth design for lumberjack blades, used at the turn of the 19th century when trees were still felled by hand, was modeled after observations of a wood-burrowing beetle. It revolutionized the industry because the blades felled trees so much faster.
  • Cat's eye reflectors were invented by Percy Shaw in 1935 after studying the mechanism of cat eyes. He had found that cats had a system of reflecting cells, known as tapetum lucidum, which was capable of reflecting the tiniest bit of light.
  • Leonardo da Vinci's flying machines and ships are early examples of drawing from nature in engineering.
  • Resilin is a replacement for rubber that has been created by studying the material also found in arthropods.
  • Julian Vincent drew from the study of pinecones when he developed in 2004 "smart" clothing that adapts to changing temperatures. "I wanted a nonliving system which would respond to changes in moisture by changing shape", he said. "There are several such systems in plants, but most are very small – the pinecone is the largest and therefore the easiest to work on". Pinecones respond to higher humidity by opening their scales (to disperse their seeds). The "smart" fabric does the same thing, opening up when the wearer is warm and sweating, and shutting tight when cold.
  • "Morphing aircraft wings" that change shape according to the speed and duration of flight were designed in 2004 by biomimetic scientists from Penn State University. The morphing wings were inspired by different bird species that have differently shaped wings according to the speed at which they fly. In order to change the shape and underlying structure of the aircraft wings, the researchers needed to make the overlying skin also be able to change, which their design does by covering the wings with fish-inspired scales that could slide over each other. In some respects this is a refinement of the swing-wing design.
Lotus leaf surface, rendered: microscopic view
  • Some paints and roof tiles have been engineered to be self-cleaning by copying the mechanism from the Nelumbo lotus.
  • Cholesteric liquid crystals (CLCs) are the thin-film materials, often used in fish-tank thermometers and mood rings, that change color with temperature. They do so because their molecules are arranged in a helical (chiral) structure whose pitch changes with temperature, reflecting different wavelengths of light. Chiral Photonics, Inc. has abstracted the self-assembled structure of the organic CLCs to produce analogous optical devices using tiny lengths of inorganic, twisted glass fiber.
  • Nanostructures and physical mechanisms that produce the shining color of butterfly wings were reproduced in silico by Greg Parker, professor of Electronics and Computer Science at the University of Southampton and research student Luca Plattner in the field of photonics, which is electronics using photons as the information carrier instead of electrons.
  • The wing structure of the blue morpho butterfly was studied and the way it reflects light was mimicked to create an RFID tag that can be read through water and on metal.
  • The wing structure of butterflies has also inspired the creation of new nanosensors to detect explosives.
  • Neuromorphic chips, silicon retinae, and silicon cochleae have wiring modelled after real neural networks.
  • Technoecosystems or 'EcoCyborg' systems involve the coupling of natural ecological processes to technological ones which mimic ecological functions. This results in the creation of a self-regulating hybrid system. Research into this field was initiated by Howard T. Odum, who perceived the structure and emergy dynamics of ecosystems as being analogous to energy flow between components of an electrical circuit.
  • Medical adhesives involving glue and tiny nano-hairs are being developed based on the physical structures found in the feet of geckos.
  • Computer viruses also show similarities with biological viruses in the way they direct program-oriented information toward self-reproduction and dissemination.
  • The cooling system of the Eastgate Centre building in Harare was modeled after a termite mound to achieve very efficient passive cooling.
  • The adhesive that allows mussels to stick to rocks, piers and boat hulls inspired a bioadhesive gel for blood vessels.
  • Through the field of bionics, new aircraft designs with far greater agility and other advantages may be created, as described by Geoff Spedding and Anders Hedenström in an article in the Journal of Experimental Biology. Similar statements were made by John Videler and Eize Stamhuis in their book Avian Flight and in an article in Science about LEVs; the two have since worked out real-life improvements to airplane wings using bionics research. Such research may also be used to create more efficient helicopters or miniature UAVs, as Bret Tobalske noted in an article in Science about hummingbirds; Tobalske has since begun work on miniature UAVs that may be used for espionage. UC Berkeley and ESA have been working in a similar direction, creating the Robofly (a miniature UAV) and the Entomopter (a UAV which can walk, crawl and fly).
  • A bio-inspired mechanical device can generate plasma in water via cavitation, using a morphologically accurate model of the snapping shrimp claw. This was described in detail by Xin Tang and David Staack in an article published in Science Advances.
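The colour change in the cholesteric liquid crystal bullet above reduces to a simple relation: at normal incidence the helix selectively reflects light whose wavelength is roughly the average refractive index times the helical pitch, so a temperature-driven change of pitch shifts the colour. A minimal sketch, with an illustrative refractive index and pitch values:

```python
def clc_wavelength(pitch_nm, avg_index=1.5):
    """Central wavelength selectively reflected by a cholesteric helix
    at normal incidence: lambda ~= n_avg * pitch."""
    return avg_index * pitch_nm

# A change of pitch with temperature shifts the reflected colour; whether
# the pitch lengthens or shortens as the material warms depends on the material.
blue = clc_wavelength(300)   # 450.0 nm, blue
red = clc_wavelength(430)    # 645.0 nm, red
```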

Specific uses of the term


In medicine

Bionics refers to the flow of concepts from biology to engineering and vice versa. Hence, there are two slightly different points of view regarding the meaning of the word.

In medicine, bionics means the replacement or enhancement of organs or other body parts by mechanical versions. Bionic implants differ from mere prostheses by mimicking the original function very closely, or even surpassing it.

Bionics' German equivalent, Bionik, always adheres to the broader meaning, in that it tries to develop engineering solutions from biological models. This approach is motivated by the fact that biological solutions will usually be optimized by evolutionary forces.

While the technologies that make bionic implants possible are developing gradually, a few successful bionic devices exist, a well known one being the Australian-invented multi-channel cochlear implant (bionic ear), a device for deaf people. Since the bionic ear, many bionic devices have emerged and work is progressing on bionics solutions for other sensory disorders (e.g. vision and balance). Bionic research has recently provided treatments for medical problems such as neurological and psychiatric conditions, for example Parkinson's disease and epilepsy.

In 1997, the Colombian Prof. Alvaro Rios Poveda, a bionics researcher in Latin America, developed an upper limb and hand prosthesis with sensory feedback. This technology allows amputee patients to handle prosthetic hand systems in a more natural way.

By 2004, fully functional artificial hearts had been developed. Significant further progress is expected with the advent of nanotechnology. A well-known example of a proposed nanodevice is the respirocyte, an artificial red blood cell designed (though not yet built) by Robert Freitas.

Kwabena Boahen from Ghana was a professor in the Department of Bioengineering at the University of Pennsylvania. During his eight years at Penn, he developed a silicon retina that was able to process images in the same manner as a living retina. He confirmed the results by comparing the electrical signals from his silicon retina to the electrical signals produced by a salamander eye while the two retinas were looking at the same image.

The Nichi-In group, working on biomimicking scaffolds in tissue engineering, stem cells, and regenerative medicine, has given a detailed classification of biomimetics in medicine.

On 21 July 2015, the BBC's medical correspondent Fergus Walsh reported, "Surgeons in Manchester have performed the first bionic eye implant in a patient with the most common cause of sight loss in the developed world. Ray Flynn, 80, has dry age-related macular degeneration which has led to the total loss of his central vision. He is using a retinal implant which converts video images from a miniature video camera worn on his glasses. He can now make out the direction of white lines on a computer screen using the retinal implant." The implant, known as the Argus II and manufactured in the US by the company Second Sight Medical Products, had been used previously in patients who were blind as the result of the rare inherited degenerative eye disease retinitis pigmentosa.

On 17 February 2020, military veteran Darren Fuller became the first person to receive a bionic arm. Fuller lost the lower section of his right arm in 2008, while serving in Afghanistan, in an incident involving mortar ammunition.

Politics

A political form of biomimicry is bioregional democracy, wherein political borders conform to natural ecoregions rather than human cultures or the outcomes of prior conflicts.

Critics of these approaches often argue that ecological selection itself is a poor model for minimizing manufacturing complexity or conflict, and that the free market relies on conscious cooperation, agreement, and standards as much as on efficiency, making it more analogous to sexual selection. Charles Darwin himself contended that both were balanced in natural selection, although his contemporaries often avoided frank talk about sex, or any suggestion that free-market success was based on persuasion rather than value.

Advocates, especially in the anti-globalization movement, argue that the mating-like processes of standardization, financing, and marketing are already examples of runaway evolution, rendering a system that appeals to the consumer but is inefficient in its use of energy and raw materials. Biomimicry, they argue, is an effective strategy for restoring basic efficiency.

Biomimicry is also the second principle of Natural Capitalism.

Other uses

Business biomimetics is the latest development in the application of biomimetics. Specifically, it applies principles and practice from biological systems to business strategy, process, organisation design and strategic thinking. It has been used successfully by a range of industries in FMCG, defence, central government, packaging and business services. Based on work by Phil Richardson at the University of Bath, the approach was launched at the House of Lords in May 2009.

In a more specific meaning, it is a creativity technique that tries to use biological prototypes to get ideas for engineering solutions. This approach is motivated by the fact that biological organisms and their organs have been well optimized by evolution. In chemistry, a biomimetic synthesis is a chemical synthesis inspired by biochemical processes.

Another, more recent meaning of the term bionics refers to merging organism and machine. This approach results in a hybrid system combining biological and engineering parts, which can also be referred to as a cybernetic organism (cyborg). A practical realization of this was demonstrated in Kevin Warwick's implant experiments, which brought about ultrasound input via his own nervous system.


Animal welfare


A four-week-old puppy, found alongside a road after flooding in West Virginia, United States, is fed at an Emergency Animal Rescue Service shelter in the Twin Falls State Park.

Animal welfare is the well-being of non-human animals. Formal standards of animal welfare vary between contexts, but are debated mostly by animal welfare groups, legislators, and academics. Animal welfare science uses measures such as longevity, disease, immunosuppression, behavior, physiology, and reproduction, although there is debate about which of these best indicate animal welfare.

Respect for animal welfare is often based on the belief that nonhuman animals are sentient and that consideration should be given to their well-being or suffering, especially when they are under the care of humans. These concerns can include how animals are slaughtered for food, how they are used in scientific research, how they are kept (as pets, in zoos, farms, circuses, etc.), and how human activities affect the welfare and survival of wild species.

There are two forms of criticism of the concept of animal welfare, coming from diametrically opposite positions. One view, held by some thinkers in history, holds that humans have no duties of any kind to animals. The other view is based on the animal rights position that animals should not be regarded as property and any use of animals by humans is unacceptable. Accordingly, some animal rights proponents argue that the perception of better animal welfare facilitates continued and increased exploitation of animals. Some authorities therefore treat animal welfare and animal rights as two opposing positions. Others see animal welfare gains as incremental steps towards animal rights.

The predominant view of modern neuroscientists, notwithstanding philosophical problems with the definition of consciousness even in humans, is that consciousness exists in nonhuman animals. However, some still maintain that consciousness is a philosophical question that may never be scientifically resolved. One recent study managed to overcome some of the difficulties of testing this question empirically, devising a way to dissociate conscious from nonconscious perception in animals. In this study, conducted in rhesus monkeys, the researchers built experiments predicting opposite behavioral outcomes for consciously versus non-consciously perceived stimuli; the monkeys' behavior displayed exactly these opposite signatures, just like the aware and unaware humans tested in the study.

History, principles and practice

Animal protection laws were enacted as early as the 13th century AD by Genghis Khan in Mongolia, protecting wildlife during the breeding season (March to October).

Early legislation in the Western world on behalf of animals includes the Ireland Parliament (Thomas Wentworth) "An Act against Plowing by the Tayle, and pulling the Wooll off living Sheep", 1635, and the Massachusetts Colony (Nathaniel Ward) "Off the Bruite Creatures" Liberty 92 and 93 in the "Massachusetts Body of Liberties" of 1641.

In 1776, English clergyman Humphrey Primatt authored A Dissertation on the Duty of Mercy and Sin of Cruelty to Brute Animals, one of the first books published in support of animal welfare. Marc Bekoff has noted that "Primatt was largely responsible for bringing animal welfare to the attention of the general public."

Since 1822, when Irish MP Richard Martin brought the "Cruel Treatment of Cattle Act 1822" through Parliament offering protection from cruelty to cattle, horses, and sheep, an animal welfare movement has been active in England. Martin was among the founders of the world's first animal welfare organization, the Society for the Prevention of Cruelty to Animals, or SPCA, in 1824. In 1840, Queen Victoria gave the society her blessing, and it became the RSPCA. The society used members' donations to employ a growing network of inspectors, whose job was to identify abusers, gather evidence, and report them to the authorities.

In 1837, the German minister Albert Knapp founded the first German animal welfare society.

One of the first national laws to protect animals was the UK "Cruelty to Animals Act 1835" followed by the "Protection of Animals Act 1911". In the US it was many years until there was a national law to protect animals—the "Animal Welfare Act of 1966"—although there were a number of states that passed anti-cruelty laws between 1828 and 1898. In India, animals are protected by the "Prevention of Cruelty to Animals Act, 1960".

Significant progress in animal welfare did not take place until the late 20th century. In 1965, the UK government commissioned an investigation—led by Professor Roger Brambell—into the welfare of intensively farmed animals, partly in response to concerns raised in Ruth Harrison's 1964 book, Animal Machines. On the basis of Professor Brambell's report, the UK government set up the Farm Animal Welfare Advisory Committee in 1967, which became the Farm Animal Welfare Council in 1979. The committee's first guidelines recommended that animals require the freedoms to "stand up, lie down, turn around, groom themselves and stretch their limbs." The guidelines have since been elaborated upon to become known as the Five Freedoms.

In the UK, the "Animal Welfare Act 2006" consolidated many different forms of animal welfare legislation.

A number of animal welfare organisations are campaigning to achieve a Universal Declaration on Animal Welfare (UDAW) at the United Nations. In principle, the Universal Declaration would call on the United Nations to recognise animals as sentient beings, capable of experiencing pain and suffering, and to recognise that animal welfare is an issue of importance as part of the social development of nations worldwide. The campaign to achieve the UDAW is being co-ordinated by World Animal Protection, with a core working group including Compassion in World Farming, the RSPCA, and the Humane Society International (the international branch of HSUS).

Animal welfare science

Animal welfare science is an emerging field that seeks to answer questions raised by the keeping and use of animals, such as whether hens are frustrated when confined in cages, whether the psychological well-being of animals in laboratories can be maintained, and whether zoo animals are stressed by the transport required for international conservation. Ireland leads research into farm animal welfare with the recently published Research Report on Farm Animal Welfare.

Animal welfare issues

Farmed animals

The welfare of egg laying hens in battery cages (top) can be compared with the welfare of free range hens (middle and bottom) which are given access to the outdoors. However, animal welfare groups argue that the vast majority of free-range hens are still intensively confined (bottom) and are rarely able to go outdoors.

A major concern for the welfare of farmed animals is factory farming, in which large numbers of animals are reared in confinement at high stocking densities. Issues include the limited opportunity for natural behavior in, for example, battery cages and veal and gestation crates, which instead produce abnormal behaviors such as tail-biting, cannibalism, and feather pecking, as well as routine invasive procedures such as beak trimming, castration, and ear notching. More extensive methods of farming, e.g. free range, can also raise welfare concerns such as the mulesing of sheep, predation of stock by wild animals, and biosecurity.

Farmed animals are artificially selected for production parameters which sometimes impinge on the animals' welfare. For example, broiler chickens are bred to be very large to produce the greatest quantity of meat per animal. Broilers bred for fast growth have a high incidence of leg deformities because the large breast muscles cause distortions of the developing legs and pelvis, and the birds cannot support their increased body weight. As a consequence, they frequently become lame or suffer from broken legs. The increased body weight also puts a strain on their hearts and lungs, and ascites often develops. In the UK alone, up to 20 million broilers each year die from the stress of catching and transport before reaching the slaughterhouse.

Another concern about the welfare of farmed animals is the method of slaughter, especially ritual slaughter. While the killing of animals need not necessarily involve suffering, the general public considers that killing an animal reduces its welfare. This leads to further concerns about premature slaughtering such as chick culling by the laying hen industry, in which males are slaughtered immediately after hatching because they are superfluous; this policy occurs in other farmed animal industries such as the production of goat and cattle milk, raising the same concerns.

Cetaceans

Captive cetaceans are kept for display, research and naval operations. To enhance their welfare, humans feed them dead but disease-free fish, protect them from predators and injury, monitor their health, and provide activities for behavioral enrichment. Some are kept in lagoons with natural soil and vegetated sides; most are in concrete tanks, which are easy to clean but echo the animals' own sounds back at them. Captive cetaceans cannot develop their own social groups, and related cetaceans are typically separated for display and breeding. Military dolphins used in naval operations swim free during operations and training, and return to pens otherwise. Captive cetaceans are trained to present themselves for blood samples, health exams and noninvasive breath samples above their blowholes, so staff can monitor them afterwards for signs of infection from the procedure.

Research on wild cetaceans leaves them free to roam and make sounds in their natural habitat, eat live fish, face predators and injury, and form social groups voluntarily. However, the boat engines of researchers, whale watchers and others add substantial noise to their natural environment, reducing their ability to echolocate and communicate. Electric engines are far quieter, but are not widely used for either research or whale watching, even for maintaining position, which does not require much power. Vancouver Port offers discounts for ships with quiet propeller and hull designs, and other areas have imposed reduced speeds. Boat engines also have unshielded propellers, which cause serious injuries to cetaceans that come close to them. The US Coast Guard has proposed rules on propeller guards to protect human swimmers, but has not adopted any; the US Navy uses propeller guards to protect manatees in Georgia. Ducted propellers provide more efficient drive at speeds up to 10 knots and protect animals beneath and beside them, but need grilles to prevent injuries to animals drawn into the duct. Attaching satellite trackers and obtaining biopsies to measure pollution loads and DNA involve either capture and release, or shooting the cetaceans from a distance with dart guns. One cetacean was killed by a fungal infection after being darted, due either to an incompletely sterilized dart or to an infection from the ocean entering the wound caused by the dart. Researchers on wild cetaceans have not yet been able to use drones to capture noninvasive breath samples.

Other harms to wild cetaceans include commercial whaling, aboriginal whaling, drift netting, ship collisions, water pollution, noise from sonar and reflection seismology, predators, loss of prey, and disease. Efforts to enhance the life of wild cetaceans, besides reducing those harms, include offering human music. Canadian rules do not forbid playing quiet music, though they forbid "noise that may resemble whale songs or calls, under water".

Wild animal welfare

In addition to cetaceans, the welfare of other wild animals has also been studied, though to a lesser extent than that of animals in farms. Research in wild animal welfare has two focuses: the welfare of wild animals kept in captivity and the welfare of animals living in the wild. The former has addressed the situation of animals kept for human use, as in zoos or circuses, as well as those in rehabilitation centers. The latter has examined how the welfare of non-domesticated animals living in wild or urban areas is affected by humans or by natural factors causing wild animal suffering.

Some of the proponents of these views have advocated for carrying out conservation efforts in ways that respect the welfare of wild animals, within the framework of the disciplines of compassionate conservation and conservation welfare, while others have argued in favor of improving the welfare of wild animals for the sake of the animals, regardless of whether there are any conservation issues involved at all. The welfare economist Yew-Kwang Ng, in his 1995 "Towards welfare biology: Evolutionary economics of animal consciousness and suffering", proposed welfare biology as a research field to study "living things and their environment with respect to their welfare (defined as net happiness, or enjoyment minus suffering)."

Legislation

European Union

The European Commission's activities in this area start with the recognition that animals are sentient beings. The general aim is to ensure that animals do not endure avoidable pain or suffering, and to oblige the owner/keeper of animals to respect minimum welfare requirements. European Union legislation on farm animal welfare is regularly re-drafted according to science-based evidence and cultural views. For example, in 2009, legislation was passed aiming to reduce animal suffering during slaughter, and on 1 January 2012 the European Union Council Directive 1999/74/EC came into force, meaning that conventional battery cages for laying hens are now banned across the Union.

United Kingdom

The Animal Welfare Act 2006 makes owners and keepers responsible for ensuring that the welfare needs of their animals are met. These include the need for a suitable environment (place to live), a suitable diet, the opportunity to exhibit normal behavior patterns, to be housed with, or apart from, other animals (if applicable), and protection from pain, injury, suffering and disease. Anyone who is cruel to an animal, or does not provide for its welfare needs, may be banned from owning animals, fined up to £20,000 and/or sent to prison for a maximum of six months.

In the UK, the welfare of research animals being used for "regulated procedures" was historically protected by the Animals (Scientific Procedures) Act 1986 (ASPA), which is administered by the Home Office. The Act defines "regulated procedures" as animal experiments that could potentially cause "pain, suffering, distress or lasting harm" to "protected animals". Initially, "protected animals" encompassed all living vertebrates other than humans, but, in 1993, an amendment added a single invertebrate species, the common octopus.

Primates, cats, dogs, and horses have additional protection over other vertebrates under the Act. Revised legislation came into force in January 2013. This has been expanded to protect "...all living vertebrates, other than man, and any living cephalopod. Fish and amphibia are protected once they can feed independently and cephalopods at the point when they hatch. Embryonic and foetal forms of mammals, birds and reptiles are protected during the last third of their gestation or incubation period." The definition of regulated procedures was also expanded: "A procedure is regulated if it is carried out on a protected animal and may cause that animal a level of pain, suffering, distress or lasting harm equivalent to, or higher than, that caused by inserting a hypodermic needle according to good veterinary practice." It also includes modifying the genes of a protected animal if this causes the animal pain, suffering, distress, or lasting harm. The ASPA also considers other issues such as animal sources, housing conditions, identification methods, and the humane killing of animals.

This legislation is widely regarded as the strictest in the world. Those applying for a license must explain why such research cannot be done through non-animal methods. The project must also pass an ethical review panel which aims to decide if the potential benefits outweigh any suffering for the animals involved.

United States

In the United States, a federal law called the Humane Slaughter Act was designed to decrease suffering of livestock during slaughter.

The Georgia Animal Protection Act of 1986 was a state law enacted in response to the inhumane treatment of companion animals by a pet store chain in Atlanta. The Act provided for the licensing and regulation of pet shops, stables, kennels, and animal shelters, and established, for the first time, minimum standards of care. Additional provisions, called the Humane Euthanasia Act, were added in 1990, and then further expanded and strengthened with the Animal Protection Act of 2000.

In 2002, voters passed (by a margin of 55% for and 45% against) Amendment 10 to the Florida Constitution, banning the confinement of pregnant pigs in gestation crates. In 2006, Arizona voters passed Proposition 204 with 62% support; the legislation prohibits the confinement of calves in veal crates and breeding sows in gestation crates. In 2007, the Governor of Oregon signed legislation prohibiting the confinement of pigs in gestation crates, and in 2008 the Governor of Colorado signed legislation that phased out both gestation crates and veal crates. Also in 2008, California passed Proposition 2, known as the "Prevention of Farm Animal Cruelty Act", which mandated new space requirements for farm animals starting in 2015.

The use of animals in laboratories remains controversial. Animal welfare advocates push for enforced standards to ensure the health and safety of those animals used for tests.

In the US, every institution that uses vertebrate animals for federally funded laboratory research must have an Institutional Animal Care and Use Committee (IACUC). Each local IACUC reviews research protocols and conducts evaluations of the institution's animal care and use, including the legally required inspections of facilities. The IACUC must assess the steps taken to "enhance animal well-being" before research can take place. This includes research on farm animals.

According to the National Institutes of Health Office of Laboratory Animal Welfare, researchers must try to minimize distress in animals whenever possible: "Animals used in research and testing may experience pain from induced diseases, procedures, and toxicity. The Public Health Service (PHS) Policy and Animal Welfare Regulations (AWRs) state that procedures that cause more than momentary or slight pain or distress should be performed with appropriate sedation, analgesia, or anesthesia.

However, research and testing studies sometimes involve pain that cannot be relieved with such agents because they would interfere with the scientific objectives of the study. Accordingly, federal regulations require that IACUCs determine that discomfort to animals will be limited to that which is unavoidable for the conduct of scientifically valuable research, and that unrelieved pain and distress will only continue for the duration necessary to accomplish the scientific objectives. The PHS Policy and AWRs further state that animals that would otherwise suffer severe or chronic pain and distress that cannot be relieved should be painlessly killed at the end of the procedure, or if appropriate, during the procedure."

The National Research Council's Guide for the Care and Use of Laboratory Animals also serves as a guide to improve welfare for animals used in research in the US. The Federation of Animal Science Societies' Guide for the Care and Use of Agricultural Animals in Research and Teaching is a resource addressing welfare concerns in farm animal research. Laboratory animals in the US are also protected under the Animal Welfare Act. The United States Department of Agriculture Animal and Plant Health Inspection Service (APHIS) enforces the Animal Welfare Act. APHIS inspects animal research facilities regularly and reports are published online.

According to the U.S. Department of Agriculture (USDA), the total number of animals used in the U.S. in 2005 was almost 1.2 million, but this does not include rats, mice, and birds which are not covered by welfare legislation but make up approximately 90% of research animals.

Approaches and definitions

There are many different approaches to describing and defining animal welfare.

Positive conditions – Providing good animal welfare is sometimes defined by a list of positive conditions which should be provided to the animal. This approach is taken by the Five Freedoms and the three principles of Professor John Webster.

The Five Freedoms are:

  • Freedom from thirst and hunger – by ready access to fresh water and a diet to maintain full health and vigour
  • Freedom from discomfort – by providing an appropriate environment including shelter and a comfortable resting area
  • Freedom from pain, injury, and disease – by prevention or rapid diagnosis and treatment
  • Freedom to express most normal behavior – by providing sufficient space, proper facilities, and company of the animal's own kind
  • Freedom from fear and distress – by ensuring conditions and treatment which avoid mental suffering

John Webster defines animal welfare by advocating three positive conditions: Living a natural life, being fit and healthy, and being happy.

High production – In the past, many saw farm animal welfare chiefly in terms of whether the animal was producing well, the argument being that an animal in poor welfare would not produce well. However, many farmed animals remain highly productive despite being kept in conditions where good welfare is almost certainly compromised, e.g., layer hens in battery cages.

Emotion in animals – Others in the field, such as Professor Ian Duncan and Professor Marian Dawkins, focus more on the feelings of the animal. This approach indicates the belief that animals should be considered as sentient beings. Duncan wrote, "Animal welfare is to do with the feelings experienced by animals: the absence of strong negative feelings, usually called suffering, and (probably) the presence of positive feelings, usually called pleasure. In any assessment of welfare, it is these feelings that should be assessed." Dawkins wrote, "Let us not mince words: Animal welfare involves the subjective feelings of animals."

Welfare biology – Yew-Kwang Ng defines animal welfare in terms of welfare economics: "Welfare biology is the study of living things and their environment with respect to their welfare (defined as net happiness, or enjoyment minus suffering). Despite difficulties of ascertaining and measuring welfare and relevancy to normative issues, welfare biology is a positive science."
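Ng's definition is essentially arithmetic: an individual's welfare is its enjoyment minus its suffering, and a population's welfare is the sum over individuals. A minimal sketch of that bookkeeping (the function names and numbers here are illustrative assumptions, not from Ng's paper):

```python
def net_welfare(enjoyment: float, suffering: float) -> float:
    """One individual's welfare: net happiness = enjoyment minus suffering."""
    return enjoyment - suffering

def population_welfare(individuals) -> float:
    """Total welfare of a population given as (enjoyment, suffering) pairs."""
    return sum(net_welfare(e, s) for e, s in individuals)

# Three hypothetical animals: one thriving, one suffering, one neutral.
population = [(5.0, 2.0), (1.0, 4.0), (3.0, 3.0)]
print(population_welfare(population))  # 0.0 — gains and losses cancel out
```

Note that on this definition welfare can be negative: a population whose suffering outweighs its enjoyment has negative net welfare, which is what motivates interventions for the animals' own sake.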

Dictionary definition – In the Saunders Comprehensive Veterinary Dictionary, animal welfare is defined as "the avoidance of abuse and exploitation of animals by humans by maintaining appropriate standards of accommodation, feeding and general care, the prevention and treatment of disease and the assurance of freedom from harassment, and unnecessary discomfort and pain."

The American Veterinary Medical Association (AVMA) has defined animal welfare as: "An animal is in a good state of welfare if (as indicated by scientific evidence) it is healthy, comfortable, well nourished, safe, able to express innate behavior, and if it is not suffering from unpleasant states such as pain, fear, and distress." It has offered the following eight principles for developing and evaluating animal welfare policies.

  • The responsible use of animals for human purposes, such as companionship, food, fiber, recreation, work, education, exhibition, and research conducted for the benefit of both humans and animals, is consistent with the Veterinarian's Oath.
  • Decisions regarding animal care, use, and welfare shall be made by balancing scientific knowledge and professional judgment with consideration of ethical and societal values.
  • Animals must be provided water, food, proper handling, health care, and an environment appropriate to their care and use, with thoughtful consideration for their species-typical biology and behavior.
  • Animals should be cared for in ways that minimize fear, pain, stress, and suffering.
  • Procedures related to animal housing, management, care, and use should be continuously evaluated, and when indicated, refined or replaced.
  • Conservation and management of animal populations should be humane, socially responsible, and scientifically prudent.
  • Animals shall be treated with respect and dignity throughout their lives and, when necessary, provided a humane death.
  • The veterinary profession shall continually strive to improve animal health and welfare through scientific research, education, collaboration, advocacy, and the development of legislation and regulations.

The Terrestrial Animal Health Code of the World Organisation for Animal Health defines animal welfare as "how an animal is coping with the conditions in which it lives. An animal is in a good state of welfare if (as indicated by scientific evidence) it is healthy, comfortable, well nourished, safe, able to express innate behaviour, and if it is not suffering from unpleasant states such as pain, fear, and distress. Good animal welfare requires disease prevention and veterinary treatment, appropriate shelter, management, nutrition, humane handling and humane slaughter/killing. Animal welfare refers to the state of the animal; the treatment that an animal receives is covered by other terms such as animal care, animal husbandry, and humane treatment."

Coping – Professor Donald Broom defines the welfare of an animal as "Its state as regards its attempts to cope with its environment. This state includes how much it is having to do to cope, the extent to which it is succeeding in or failing to cope, and its associated feelings." He states that "welfare will vary over a continuum from very good to very poor and studies of welfare will be most effective if a wide range of measures is used." John Webster criticized this definition for making "no attempt to say what constitutes good or bad welfare."

Attitudes

Animal welfare often refers to a utilitarian attitude towards the well-being of nonhuman animals: animals may be used by humans provided that the animals' suffering and the costs of their use are outweighed by the benefits to humans. This attitude is also known simply as welfarism.

An example of welfarist thought is Hugh Fearnley-Whittingstall's meat manifesto. Point three of eight is:

Think about the animals that the meat you eat comes from. Are you at all concerned about how they have been treated? Have they lived well? Have they been fed on safe, appropriate foods? Have they been cared for by someone who respects them and enjoys contact with them? Would you like to be sure of that? Perhaps it's time to find out a bit more about where the meat you eat comes from. Or to buy from a source that reassures you about these points.

Robert Garner describes the welfarist position as the most widely held in modern society. He states that one of the best attempts to clarify this position is given by Robert Nozick:

Consider the following (too minimal) position about the treatment of animals. So that we can easily refer to it, let us label this position "utilitarianism for animals, Kantianism for people." It says: (1) maximize the total happiness of all living beings; (2) place stringent side constraints on what one may do to human beings. Human beings may not be used or sacrificed for the benefit of others; animals may be used or sacrificed for the benefit of other people or animals only if those benefits are greater than the loss inflicted.

Welfarism is often contrasted with the animal rights and animal liberation positions, which hold that animals should not be used by humans and should not be regarded as human property. However, it has been argued that both welfarism and animal liberation only make sense if it is assumed that animals have "subjective welfare".

New welfarism

The term "new welfarism" was coined by Gary L. Francione in 1996. It describes the view that the best way to prevent animal suffering is to abolish its causes, but that advancing animal welfare is a worthwhile goal in the short term. Thus, for instance, new welfarists want to phase out fur farms and animal experiments, but in the short term they try to improve conditions for animals within these systems, lobbying to make cages less constrictive and to reduce the number of animals used in laboratories.

Within the context of animal research, many scientific organisations believe that improved animal welfare will provide improved scientific outcomes. If an animal in a laboratory is suffering stress or pain it could negatively affect the results of the research.

Increased affluence in many regions over the past few decades has given consumers the disposable income to purchase products from high-welfare systems. The adoption of more economically efficient farming systems in these regions came at the expense of animal welfare and to the financial benefit of consumers, both of which were factors driving the demand for higher welfare for farm animals. A 2006 survey concluded that a majority (63%) of EU citizens "show some willingness to change their usual place of shopping in order to be able to purchase more animal welfare-friendly products."

The volume of scientific research on animal welfare has also increased significantly in some countries.

Criticisms

Denial of duties to animals

Some individuals in history have, at least in principle, rejected the view that humans have duties of any kind to animals.

Augustine of Hippo seemed to take such a position in his writings against those he saw as heretics: "For we see and hear by their cries that animals die with pain, although man disregards this in a beast, with which, as not having a rational soul, we have no community of rights."

Animal rights

American philosopher Tom Regan has criticized the animal welfare movement for not going far enough to protect animals' interests.

Animal rights advocates, such as Gary L. Francione and Tom Regan, argue that the animal welfare position (advocating for the betterment of the condition of animals, but without abolishing animal use) is logically inconsistent and ethically unacceptable. However, some animal rights groups, such as PETA, support animal welfare measures in the short term to alleviate animal suffering until all animal use is ended.

According to PETA's Ingrid Newkirk in an interview with Wikinews, there are two issues in animal welfare and animal rights. "If I only could have one thing, it would be to end suffering", said Newkirk. "If you could take things from animals and kill animals all day long without causing them suffering, then I would take it... Everybody should be able to agree that animals should not suffer if you kill them or steal from them by taking the fur off their backs or take their eggs, whatever. But you shouldn't put them through torture to do that."

Abolitionism holds that focusing on animal welfare not only fails to challenge animal suffering, but may actually prolong it by making the exercise of property rights over animals appear more acceptable. The abolitionists' objective is to secure a moral and legal paradigm shift whereby animals are no longer regarded as property. In recent years, documentaries such as Dominion (watchdominion.com) have exposed the suffering occurring in animal agriculture facilities that are marketed as having high welfare standards.

Animal welfare organizations

Global

World Animal Protection was founded in 1981 to protect animals around the globe.

World Organisation for Animal Health (OIE): The intergovernmental organisation responsible for improving animal health worldwide. The OIE has been established "for the purpose of projects of international public utility relating to the control of animal diseases, including those affecting humans and the promotion of animal welfare and animal production food safety."

World Animal Protection: Protects animals across the globe. World Animal Protection's objectives include helping people understand the critical importance of good animal welfare, encouraging nations to commit to animal-friendly practices, and building the scientific case for the better treatment of animals. They are global in the sense that they have consultative status at the Council of Europe and collaborate with national governments, the United Nations, the Food and Agriculture Organization and the World Organisation for Animal Health.

Non-government organizations

Canadian Council on Animal Care: The national organization responsible for overseeing the care and use of animals involved in Canadian science.

Canadian Federation of Humane Societies (CFHS): The only national organization representing humane societies and SPCAs in Canada. They provide leadership on animal welfare issues and spread the message across Canada.

The Canadian Veterinary Medical Association: Brings in veterinary involvement to animal welfare. Their objective is to share this concern of animals with all members of the profession, with the general public, with government at all levels, and with other organizations such as the CFHS, which have similar concerns.

Compassion in World Farming: Founded in 1967 by a British farmer who became horrified by the development of modern, intensive factory farming. "Today we campaign peacefully to end all cruel factory farming practices. We believe that the biggest cause of cruelty on the planet deserves a focused, specialised approach – so we only work on farm animal welfare."

The Movement for Compassionate Living: Exists to "Promote simple vegan living and self-reliance as a remedy against the exploitation of humans, animals and the Earth. Promote the use of trees and vegan-organic farming to meet the needs of society for food and natural resources. Promote a land-based society where as much of our food and resources as possible are produced locally."

National Animal Interest Alliance: An animal welfare organization in the United States, founded in 1991, that promotes the welfare of animals, strengthens the human-animal bond, and safeguards the rights of responsible animal owners, enthusiasts, and professionals through research, public information, and sound public policy. They host an online library of information about various animal-related subjects, serving as a resource for groups and individuals dedicated to responsible animal care and well-being.

National Farm Animal Care Council: Their objectives are to facilitate collaboration among members with respect to farm animal care issues in Canada, to facilitate information sharing and communication, and to monitor trends and initiatives in both the domestic and international marketplace.

National Office of Animal Health: A British organisation that represents its members drawn from the animal medicines industry.

Ontario Society for the Prevention of Cruelty to Animals: A registered charity comprising over 50 communities.

Royal Society for the Prevention of Cruelty to Animals: A well-known animal welfare charity in England and Wales, founded in 1824.

Universities Federation for Animal Welfare: A UK registered charity, established in 1926, that works to develop and promote improvements in the welfare of all animals through scientific and educational activity worldwide.
