
Saturday, April 28, 2018

Animal consciousness

From Wikipedia, the free encyclopedia
According to the Cambridge Declaration on Consciousness, "near human-like levels of consciousness" have been observed in the grey parrot.[1]

Animal consciousness, or animal awareness, is the quality or state of self-awareness within an animal, or of being aware of an external object or something within itself.[2][3] In humans, consciousness has been defined as: sentience, awareness, subjectivity, qualia, the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind.[4] Despite the difficulty in definition, many philosophers believe there is a broadly shared underlying intuition about what consciousness is.[5]

The topic of animal consciousness is beset with a number of difficulties. It poses the problem of other minds in an especially severe form because animals, lacking the ability to use human language, cannot tell us about their experiences.[6] Also, it is difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, that its life has no value, and that harming it is not morally wrong. The 17th-century French philosopher René Descartes, for example, has sometimes been blamed for mistreatment of animals because he argued that only humans are conscious.[7]

Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. The American philosopher Thomas Nagel spelled out this point of view in an influential essay titled What Is It Like to Be a Bat?. He said that an organism is conscious "if and only if there is something that it is like to be that organism—something it is like for the organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself.[8] Other thinkers, such as the cognitive scientist Douglas Hofstadter, dismiss this argument as incoherent.[9] Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive—Donald Griffin's 2001 book Animal Minds reviews a substantial portion of the evidence.[10]

Animal consciousness has been actively researched for over one hundred years.[11] In 1927 the American functional psychologist Harvey Carr argued that any valid measure or understanding of awareness in animals depends on "an accurate and complete knowledge of its essential conditions in man".[12] A more recent review concluded in 1985 that "the best approach is to use experiment (especially psychophysics) and observation to trace the dawning and ontogeny of self-consciousness, perception, communication, intention, beliefs, and reflection in normal human fetuses, infants, and children".[11] In 2012, a group of neuroscientists signed the Cambridge Declaration on Consciousness, which "unequivocally" asserted that "humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neural substrates."[13]

Philosophical background


René Descartes argued that only humans, and not other animals, are conscious

The mind–body problem in philosophy examines the relationship between mind and matter, and in particular the relationship between consciousness and the brain. A variety of approaches have been proposed. Most are either dualist or monist. Dualism maintains a rigid distinction between the realms of mind and matter. Monism maintains that there is only one kind of stuff, and that mind and matter are both aspects of it. The problem was addressed by pre-Aristotelian philosophers,[14][15] and was famously addressed by René Descartes in the 17th century, resulting in Cartesian dualism. Descartes believed that only humans, and not other animals, have this non-physical mind.

The rejection of the mind–body dichotomy is found in French Structuralism, and is a position that generally characterized post-war French philosophy.[16] The absence of an empirically identifiable meeting point between the non-physical mind and its physical extension has proven problematic to dualism, and many modern philosophers of mind maintain that the mind is not something separate from the body.[17] These approaches have been particularly influential in the sciences, especially in the fields of sociobiology, computer science, evolutionary psychology, and the neurosciences.[18][19][20][21]

Epiphenomenalism

Epiphenomenalism is the theory in philosophy of mind that mental phenomena are caused by physical processes in the brain, or that both are effects of a common cause, as opposed to mental phenomena driving the physical mechanics of the brain. The impression that thoughts, feelings, or sensations cause physical effects is therefore to be understood as illusory to some extent. For example, it is not the feeling of fear that produces an increase in heart rate; both are symptoms of a common physiological origin, possibly in response to a legitimate external threat.[22]

The history of epiphenomenalism goes back to the post-Cartesian attempt to solve the riddle of Cartesian dualism, i.e., of how mind and body could interact. La Mettrie, Leibniz and Spinoza all in their own way began this way of thinking. The idea that even if the animal were conscious nothing would be added to the production of behavior, even in animals of the human type, was first voiced by La Mettrie (1745), and then by Cabanis (1802), and was further explicated by Hodgson (1870) and Huxley (1874).[23][24] Huxley (1874) likened mental phenomena to the whistle on a steam locomotive. However, epiphenomenalism flourished primarily as it found a niche among methodological or scientific behaviorism. In the early 1900s scientific behaviorists such as Ivan Pavlov, John B. Watson, and B. F. Skinner began the attempt to uncover laws describing the relationship between stimuli and responses, without reference to inner mental phenomena. Instead of adopting a form of eliminativism or mental fictionalism, positions that deny that inner mental phenomena exist, a behaviorist was able to adopt epiphenomenalism in order to allow for the existence of mind. However, by the 1960s, scientific behaviorism met substantial difficulties and eventually gave way to the cognitive revolution. Participants in that revolution, such as Jerry Fodor, reject epiphenomenalism and insist upon the efficacy of the mind. Fodor even speaks of "epiphobia"—fear that one is becoming an epiphenomenalist.

In an essay titled On the Hypothesis that Animals are Automata, and its History, Thomas Henry Huxley defends an epiphenomenalist theory of consciousness, according to which consciousness is a causally inert effect of neural activity, "as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery".[25] William James objects to this in his essay Are We Automata?, stating an evolutionary argument for mind-brain interaction: if the preservation and development of consciousness in biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes but has had a survival value itself, and it could only have had this if it had been efficacious.[26][27] Karl Popper develops a similar evolutionary argument in his book The Self and Its Brain.[28]

Animal ethics

Bernard Rollin of Colorado State University, the principal author of two U.S. federal laws regulating pain relief for animals, writes that researchers remained unsure into the 1980s as to whether animals experience pain, and veterinarians trained in the U.S. before 1989 were simply taught to ignore animal pain.[29] In his interactions with scientists and other veterinarians, Rollin was regularly asked to prove animals are conscious and to provide scientifically acceptable grounds for claiming they feel pain.[29] Academic reviews of the topic are equivocal, noting that the argument that animals have at least simple conscious thoughts and feelings has strong support,[30] but some critics continue to question how reliably animal mental states can be determined.[31][32] A refereed journal, Animal Sentience,[33] launched in 2015 by the Institute of Science and Policy of The Humane Society of the United States, is devoted to research on this and related topics.

Defining consciousness

About forty meanings attributed to the term consciousness can be identified and categorized based on functions and experiences. The prospects for reaching any single, agreed-upon, theory-independent definition of consciousness appear remote.[34]
Consciousness is an elusive concept that presents many difficulties when attempts are made to define it.[35][36] Its study has progressively become an interdisciplinary challenge for numerous researchers, including ethologists, neurologists, cognitive neuroscientists, philosophers, psychologists and psychiatrists.[37][38]

In 1976 Richard Dawkins wrote, "The evolution of the capacity to simulate seems to have culminated in subjective consciousness. Why this should have happened is, to me, the most profound mystery facing modern biology".[39] In 2004, eight neuroscientists felt it was still too soon for a definition. They wrote an apology in "Human Brain Function":[40]
"We have no idea how consciousness emerges from the physical activity of the brain and we do not know whether consciousness can emerge from non-biological systems, such as computers... At this point the reader will expect to find a careful and precise definition of consciousness. You will be disappointed. Consciousness has not yet become a scientific term that can be defined in this way. Currently we all use the term consciousness in many different and often ambiguous ways. Precise definitions of different aspects of consciousness will emerge ... but to make precise definitions at this stage is premature."
Consciousness is sometimes defined as the quality or state of being aware of an external object or something within oneself.[3][41] It has been defined somewhat vaguely as: subjectivity, awareness, sentience, the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind.[4] Despite the difficulty in definition, many philosophers believe that there is a broadly shared underlying intuition about what consciousness is.[5] Max Velmans and Susan Schneider wrote in The Blackwell Companion to Consciousness: "Anything that we are aware of at a given moment forms part of our consciousness, making conscious experience at once the most familiar and most mysterious aspect of our lives."[42]

Related terms, also often used in vague or ambiguous ways, are:
  • Awareness: the state or ability to perceive, to feel, or to be conscious of events, objects, or sensory patterns. In this level of consciousness, sense data can be confirmed by an observer without necessarily implying understanding. More broadly, it is the state or quality of being aware of something. In biological psychology, awareness is defined as a human's or an animal's perception and cognitive reaction to a condition or event.
  • Self-awareness: the capacity for introspection and the ability to reconcile oneself as an individual separate from the environment and other individuals.
  • Self-consciousness: an acute sense of self-awareness. It is a preoccupation with oneself, as opposed to the philosophical state of self-awareness, which is the awareness that one exists as an individual being; although some writers use both terms interchangeably or synonymously.[43]
  • Sentience: the ability to be aware (feel, perceive, or be conscious) of one's surroundings or to have subjective experiences. Sentience is a minimalistic way of defining consciousness, which is otherwise commonly used to collectively describe sentience plus other characteristics of the mind.
  • Sapience: often defined as wisdom, or the ability of an organism or entity to act with appropriate judgment, a mental faculty which is a component of intelligence or alternatively may be considered an additional faculty, apart from intelligence, with its own properties.
  • Qualia: individual instances of subjective, conscious experience.
Sentience (the ability to feel, perceive, or to experience subjectivity) is not the same as self-awareness (being aware of oneself as an individual). The mirror test is sometimes considered to be an operational test for self-awareness, and the handful of animals that have passed it are often considered to be self-aware.[44][45] It remains debatable whether recognition of one's mirror image can be properly construed to imply full self-awareness,[46] particularly given that robots are being constructed which appear to pass the test.[47][48]

Much has been learned in neuroscience about correlations between brain activity and subjective, conscious experiences, and many suggest that neuroscience will ultimately explain consciousness; "...consciousness is a biological process that will eventually be explained in terms of molecular signaling pathways used by interacting populations of nerve cells...".[49] However, this view has been criticized because consciousness has yet to be shown to be a process,[50] and the so-called "hard problem" of relating consciousness directly to brain activity remains elusive.[51]

Scientific approaches

After Descartes proposed dualism, a general consensus formed that the mind was a matter for philosophy and that science was not able to penetrate the issue of consciousness, i.e., that consciousness was outside of space and time. However, over the last 20 years, many scholars have begun to move toward a science of consciousness. Antonio Damasio and Gerald Edelman are two neuroscientists who have led the move to uncover neural correlates of the self and of consciousness. Damasio has demonstrated that emotions and their biological foundation play a critical role in high-level cognition,[52][53] and Edelman has created a framework for analyzing consciousness through a scientific outlook. The current problem consciousness researchers face involves explaining how and why consciousness arises from neural computation.[54][55] In his research on this problem, Edelman has developed a theory of consciousness, in which he coined the terms primary consciousness and secondary consciousness.[56][57]

Eugene Linden, author of The Parrot's Lament, suggests there are many examples of animal behavior and intelligence that surpass what people would suppose to be the boundary of animal consciousness. Linden contends that in many of these documented examples, a variety of animal species exhibit behavior that can only be attributed to emotion, and to a level of consciousness that we would normally ascribe only to our own species.[58]

Philosopher Daniel Dennett counters that:
Consciousness requires a certain kind of informational organization that does not seem to be 'hard-wired' in humans, but is instilled by human culture. Moreover, consciousness is not a black-or-white, all-or-nothing type of phenomenon, as is often assumed. The differences between humans and other species are so great that speculations about animal consciousness seem ungrounded. Many authors simply assume that an animal like a bat has a point of view, but there seems to be little interest in exploring the details involved.[59]
Consciousness in mammals (including humans) is an aspect of the mind generally thought to comprise qualities such as subjectivity, sentience, and the ability to perceive the relationship between oneself and one's environment. It is a subject of much research in philosophy of mind, psychology, neuroscience, and cognitive science. Some philosophers divide consciousness into phenomenal consciousness, which is subjective experience itself, and access consciousness, which refers to the global availability of information to processing systems in the brain.[60] Phenomenal consciousness has many different experienced qualities, often referred to as qualia. Phenomenal consciousness is usually consciousness of something or about something, a property known as intentionality in philosophy of mind.[60]

In humans, there are three common methods of studying consciousness: verbal report, behavioural demonstrations, and neural correlation with conscious activity. Unfortunately, these can be generalized to non-human taxa only with varying degrees of difficulty.[61]

Mirror test


Elephants can recognize themselves in a mirror.[62]

The sense in which animals (or human infants) can be said to have consciousness or a self-concept has been hotly debated; it is often referred to as the debate over animal minds. The best known research technique in this area is the mirror test devised by Gordon G. Gallup, in which the skin of an animal (or human infant) is marked, while it is asleep or sedated, with a mark that cannot be seen directly but is visible in a mirror. The animal is then allowed to see its reflection in a mirror; if the animal spontaneously directs grooming behaviour towards the mark, that is taken as an indication that it is aware of itself.[63][64] Over the past 30 years, many studies have found evidence that animals recognise themselves in mirrors. Self-awareness by this criterion has been reported for:
  • Apes
  • Other land mammals
  • Cetaceans
  • Birds
Until recently it was thought that self-recognition was absent from animals without a neocortex, and was restricted to mammals with large brains and well-developed social cognition. However, in 2008 a study of self-recognition in corvids reported significant results for magpies. Mammals and birds inherited the same brain components from their last common ancestor nearly 300 million years ago, and have since independently evolved and formed significantly different brain types. The results of the mirror and mark tests showed that neocortex-less magpies are capable of understanding that a mirror image belongs to their own body. The findings show that magpies respond in the mirror and mark test in a manner similar to apes, dolphins and elephants. This is a remarkable capability that, although not fully concrete in its determination of self-recognition, is at least a prerequisite of self-recognition. This is not only of interest regarding the convergent evolution of social intelligence; it is also valuable for an understanding of the general principles that govern cognitive evolution and their underlying neural mechanisms. Magpies were chosen for study based on their empathic lifestyle, a possible precursor of their capacity for self-awareness.[64] However, even in the chimpanzee, the species most studied and with the most convincing findings, clear-cut evidence of self-recognition is not obtained in all individuals tested. Occurrence is about 75% in young adults and considerably less in young and old individuals.[72] For monkeys, non-primate mammals, and a number of bird species, exploration of the mirror and social displays were observed. However, hints of mirror-induced self-directed behavior have been obtained.[73]

The mirror test has attracted controversy among some researchers because it is entirely focused on vision, the primary sense in humans, while other species rely more heavily on other senses, such as the olfactory sense in dogs.[74][75][76] A study in 2015 showed that the "sniff test of self-recognition" (STSR) provides evidence of self-awareness in dogs.[76]

Language

Another approach to determine whether a non-human animal is conscious derives from passive speech research with a macaw (see Arielle). Some researchers propose that by passively listening to an animal's voluntary speech, it is possible to learn about the thoughts of another creature and to determine that the speaker is conscious. This type of research was originally used by Weir (1962) to investigate a child's crib speech and by Greenfield and others (1976) in investigations of early speech in children.

Zipf's law might be used to indicate whether a given dataset of animal communication represents an intelligent natural language. Some researchers have used this approach to study bottlenose dolphin language.[77]
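As a rough illustration of the idea (a sketch only, not the method of the cited dolphin study; the whistle-type labels are hypothetical), the rank-frequency slope of a communication dataset can be estimated in a few lines of Python. Word frequencies in human languages fall off roughly as 1/rank, so a log-log slope near -1 is taken as Zipf-like:

    import math
    from collections import Counter

    def zipf_slope(signals):
        # Rank the distinct signal types by frequency, then fit a line
        # to log(frequency) versus log(rank) by least squares.
        counts = sorted(Counter(signals).values(), reverse=True)
        xs = [math.log(r) for r in range(1, len(counts) + 1)]
        ys = [math.log(c) for c in counts]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))

    # Hypothetical whistle-type labels from a recording session
    whistles = ["A"] * 50 + ["B"] * 25 + ["C"] * 17 + ["D"] * 12 + ["E"] * 10
    print(zipf_slope(whistles))  # approximately -1 for a Zipf-like distribution

A slope near -1 is at most suggestive: random processes can also produce Zipf-like statistics, which is one reason such claims remain contested.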

Pain or suffering

Further arguments revolve around the ability of animals to feel pain or suffering. Suffering implies consciousness. If animals can be shown to suffer in a way similar or identical to humans, many of the arguments against human suffering could then, presumably, be extended to animals. Others have argued that pain can be demonstrated by adverse reactions to negative stimuli that are non-purposeful or even maladaptive.[78] One such reaction is transmarginal inhibition, a phenomenon observed in humans and some animals akin to mental breakdown.

Carl Sagan, the American cosmologist, points to reasons why humans have had a tendency to deny animals can suffer:
Humans – who enslave, castrate, experiment on, and fillet other animals – have had an understandable penchant for pretending animals do not feel pain. A sharp distinction between humans and 'animals' is essential if we are to bend them to our will, make them work for us, wear them, eat them – without any disquieting tinges of guilt or regret. It is unseemly of us, who often behave so unfeelingly toward other animals, to contend that only humans can suffer. The behavior of other animals renders such pretensions specious. They are just too much like us.[79]
John Webster, a professor of animal husbandry at Bristol, argues:
People have assumed that intelligence is linked to the ability to suffer and that because animals have smaller brains they suffer less than humans. That is a pathetic piece of logic, sentient animals have the capacity to experience pleasure and are motivated to seek it, you only have to watch how cows and lambs both seek and enjoy pleasure when they lie with their heads raised to the sun on a perfect English summer's day. Just like humans.[80]
However, there is no agreement on where the line between organisms that can feel pain and those that cannot should be drawn. Justin Leiber, a philosophy professor at Oxford University, writes that:
Montaigne is ecumenical in this respect, claiming consciousness for spiders and ants, and even writing of our duties to trees and plants. Singer and Clarke agree in denying consciousness to sponges. Singer locates the distinction somewhere between the shrimp and the oyster. He, with rather considerable convenience for one who is thundering hard accusations at others, slides by the case of insects and spiders and bacteria, they pace Montaigne, apparently and rather conveniently do not feel pain. The intrepid Midgley, on the other hand, seems willing to speculate about the subjective experience of tapeworms ...Nagel ... appears to draw the line at flounders and wasps, though more recently he speaks of the inner life of cockroaches.[81]
There are also some who reject the argument entirely, arguing that although suffering animals feel anguish, a suffering plant also struggles to stay alive (albeit in a less visible way). In fact, no living organism 'wants' to die for another organism's sustenance. In an article written for the New York Times, Carol Kaesuk Yoon argues that:
When a plant is wounded, its body immediately kicks into protection mode. It releases a bouquet of volatile chemicals, which in some cases have been shown to induce neighboring plants to pre-emptively step up their own chemical defenses and in other cases to lure in predators of the beasts that may be causing the damage to the plants. Inside the plant, repair systems are engaged and defenses are mounted, the molecular details of which scientists are still working out, but which involve signaling molecules coursing through the body to rally the cellular troops, even the enlisting of the genome itself, which begins churning out defense-related proteins ... If you think about it, though, why would we expect any organism to lie down and die for our dinner? Organisms have evolved to do everything in their power to avoid being extinguished. How long would any lineage be likely to last if its members effectively didn't care if you killed them? [82]

Cognitive bias and emotion


Is the glass half empty or half full?

Cognitive bias in animals is a pattern of deviation in judgment, whereby inferences about other animals and situations may be drawn in an illogical fashion.[83] Individuals create their own "subjective social reality" from their perception of the input.[84] A classic illustration is the question "Is the glass half empty or half full?", used as an indicator of optimism or pessimism. Cognitive biases have been shown in a wide range of species including rats, dogs, rhesus macaques, sheep, chicks, starlings and honeybees.[85][86][87]
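The paradigm behind these studies is the judgment-bias test: an animal is trained that one cue predicts reward and another predicts punishment or no reward, and is then probed with an ambiguous, intermediate cue. The sketch below shows one plausible way such data might be scored in Python; the trial format and the scaling are illustrative assumptions, not a standard taken from any particular study:

    def optimism_index(trials):
        # Each trial is a (cue, responded_go) pair, where cue is
        # "positive", "ambiguous", or "negative", and responded_go is
        # 1 if the animal made the reward-seeking response, else 0.
        go = {"positive": 0, "ambiguous": 0, "negative": 0}
        total = {"positive": 0, "ambiguous": 0, "negative": 0}
        for cue, responded_go in trials:
            total[cue] += 1
            go[cue] += responded_go
        rate = {cue: go[cue] / total[cue] for cue in total}
        # Scale the ambiguous-cue response rate between the two anchors:
        # 1.0 = responds as if positive (optimistic), 0.0 = as if negative.
        return ((rate["ambiguous"] - rate["negative"])
                / (rate["positive"] - rate["negative"]))

    trials = ([("positive", 1)] * 18 + [("positive", 0)] * 2 +
              [("negative", 1)] * 3 + [("negative", 0)] * 17 +
              [("ambiguous", 1)] * 12 + [("ambiguous", 0)] * 8)
    print(optimism_index(trials))  # 0.6: responses lean toward the positive cue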

The neuroscientist Joseph LeDoux advocates avoiding terms derived from human subjective experience when discussing brain functions in animals.[88] For example, the common practice of calling brain circuits that detect and respond to threats "fear circuits" implies that these circuits are responsible for feelings of fear. LeDoux argues that Pavlovian fear conditioning should be renamed Pavlovian threat conditioning to avoid the implication that "fear" is being acquired in rats or humans.[89] Key to his theoretical change is the notion of survival functions mediated by survival circuits, the purpose of which is to keep organisms alive rather than to make emotions. For example, defensive survival circuits exist to detect and respond to threats. While all organisms can do this, only organisms that can be conscious of their own brain's activities can feel fear. Fear is a conscious experience and occurs the same way as any other kind of conscious experience: via cortical circuits that allow attention to certain forms of brain activity. LeDoux argues that the only differences between an emotional and a non-emotional state of consciousness are the underlying neural ingredients that contribute to the state.[90][91]

Neuroscience


Drawing by Santiago Ramón y Cajal (1899) of neurons in the pigeon cerebellum

Neuroscience is the scientific study of the nervous system.[92] It is a highly active interdisciplinary science that collaborates with many other fields. The scope of neuroscience has broadened recently to include molecular, cellular, developmental, structural, functional, evolutionary, computational, and medical aspects of the nervous system. Theoretical studies of neural networks are being complemented with techniques for imaging sensory and motor tasks in the brain. According to a 2008 paper, neuroscience explanations of psychological phenomena currently have a "seductive allure", and "seem to generate more public interest" than explanations which do not contain neuroscientific information.[93] The authors found that subjects who were not neuroscience experts "judged that explanations with logically irrelevant neuroscience information were more satisfying than explanations without".[93]

Neural correlates

The neural correlates of consciousness constitute the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept.[94] Neuroscientists use empirical approaches to discover neural correlates of subjective phenomena.[95] The set should be minimal because, if the brain is sufficient to give rise to any given conscious experience, the question is which of its components is necessary to produce it.

Visual sense and representation was reviewed in 1998 by Francis Crick and Christof Koch. They concluded sensory neuroscience can be used as a bottom-up approach to studying consciousness, and suggested experiments to test various hypotheses in this research stream.[96]

A feature that distinguishes humans from most animals is that we are not born with an extensive repertoire of behavioral programs that would enable us to survive on our own ("physiological prematurity"). To compensate for this, we have an unmatched ability to learn, i.e., to consciously acquire such programs by imitation or exploration. Once consciously acquired and sufficiently exercised, these programs can become automated to the extent that their execution happens beyond the realms of our awareness. Take, as an example, the incredible fine motor skills exerted in playing a Beethoven piano sonata or the sensorimotor coordination required to ride a motorcycle along a curvy mountain road. Such complex behaviors are possible only because a sufficient number of the subprograms involved can be executed with minimal or even suspended conscious control. In fact, the conscious system may actually interfere somewhat with these automated programs.[97]

The growing ability of neuroscientists to manipulate neurons using methods from molecular biology in combination with optical tools depends on the simultaneous development of appropriate behavioural assays and model organisms amenable to large-scale genomic analysis and manipulation.[98] A combination of such fine-grained neuronal analysis in animals with ever more sensitive psychophysical and brain imaging techniques in humans, complemented by the development of a robust theoretical predictive framework, will hopefully lead to a rational understanding of consciousness.

Neocortex


Researchers previously thought that the neural sleep patterns exhibited by zebra finches required a mammalian neocortex[1]

The neocortex is a part of the brain of mammals. It consists of the grey matter, or neuronal cell bodies and unmyelinated fibers, surrounding the deeper white matter (myelinated axons) in the cerebrum. The neocortex is smooth in rodents and other small mammals, whereas in primates and other larger mammals it has deep grooves and wrinkles. These folds increase the surface area of the neocortex considerably without taking up too much more volume. Also, neurons within the same wrinkle have more opportunity for connectivity, while neurons in different wrinkles have less opportunity for connectivity, leading to compartmentalization of the cortex. The neocortex is divided into frontal, parietal, occipital, and temporal lobes, which perform different functions. For example, the occipital lobe contains the primary visual cortex, and the temporal lobe contains the primary auditory cortex. Further subdivisions or areas of neocortex are responsible for more specific cognitive processes. The neocortex is the newest part of the cerebral cortex to evolve (hence the prefix "neo"); the other parts of the cerebral cortex are the paleocortex and archicortex, collectively known as the allocortex. In humans, 90% of the cerebral cortex is neocortex.

Researchers have argued that consciousness arises in the neocortex, and therefore cannot arise in animals which lack a neocortex. For example, Rose argued in 2002 that "fishes have nervous systems that mediate effective escape and avoidance responses to noxious stimuli, but, these responses must occur without a concurrent, human-like awareness of pain, suffering or distress, which depend on separately evolved neocortex."[99] Recently that view has been challenged, and many researchers now believe that animal consciousness can arise from homologous subcortical brain networks.[1]

Attention

Attention is the cognitive process of selectively concentrating on one aspect of the environment while ignoring other things. Attention has also been referred to as the allocation of processing resources.[100] Attention also varies across cultures: voluntary attention develops in specific cultural and institutional contexts through engagement in cultural activities with more competent community members.[101]

Most experiments show that one neural correlate of attention is enhanced firing. If a neuron has a certain response to a stimulus when the animal is not attending to the stimulus, then when the animal does attend to the stimulus, the neuron's response will be enhanced even if the physical characteristics of the stimulus remain the same. In many cases attention produces changes in the EEG. Many animals, including humans, produce gamma waves (40–60 Hz) when focusing attention on a particular object or activity.[102]
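For instance, the gamma-band power of a recording can be estimated from its power spectral density. The snippet below is a minimal sketch assuming a single-channel recording sampled at a known rate; the synthetic signal simply stands in for real data:

    import numpy as np
    from scipy.signal import welch

    def gamma_band_power(eeg, fs, low=40.0, high=60.0):
        # Welch's method gives a smoothed power spectral density;
        # summing it over 40-60 Hz approximates the gamma-band power.
        freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), int(fs)))
        band = (freqs >= low) & (freqs <= high)
        df = freqs[1] - freqs[0]
        return float(psd[band].sum() * df)

    fs = 500.0
    t = np.arange(0, 2.0, 1.0 / fs)
    rng = np.random.default_rng(0)
    baseline = rng.standard_normal(len(t))
    # "Attending" signal: the same noise plus an oscillation in the gamma band
    attending = baseline + 1.5 * np.sin(2 * np.pi * 50 * t)
    print(gamma_band_power(attending, fs) > gamma_band_power(baseline, fs))  # True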

Extended consciousness

Extended consciousness is an animal's autobiographical self-perception. It is thought to arise in the brains of animals which have a substantial capacity for memory and reason. It does not necessarily require language. The perception of a historic and future self arises from a stream of information from the immediate environment and from neural structures related to memory. The concept was popularised by Antonio Damasio and is used in biological psychology. Extended consciousness is said to arise in structures in the human brain described as image spaces and dispositional spaces. Image spaces imply areas where sensory impressions of all types are processed, including the focused awareness of the core consciousness. Dispositional spaces include convergence zones, which are networks in the brain where memories are processed and recalled, and where knowledge is merged with immediate experience.[103][104]

Metacognition

Metacognition is defined as "cognition about cognition", or "knowing about knowing."[105] It can take many forms; it includes knowledge about when and how to use particular strategies for learning or for problem solving.[105] It has been suggested that metacognition in some animals provides evidence for cognitive self-awareness.[106] There are generally two components of metacognition: knowledge about cognition, and regulation of cognition.[107] Writings on metacognition can be traced back at least as far as De Anima and the Parva Naturalia of the Greek philosopher Aristotle.[108] Metacognologists believe that the ability to consciously think about thinking is unique to sapient species and indeed is one of the definitions of sapience. There is evidence that rhesus monkeys and apes can make accurate judgments about the strengths of their memories of fact and monitor their own uncertainty,[109] while attempts to demonstrate metacognition in birds have been inconclusive.[110] A 2007 study provided some evidence for metacognition in rats,[111][112][113] but further analysis suggested that they may have been following simple operant conditioning principles,[114] or a behavioral economic model.[115]
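One common design behind such uncertainty-monitoring results is the "uncertainty response" (opt-out) task: the animal may decline a trial for a small guaranteed reward, and metacognition is inferred when declines track trial difficulty. A minimal sketch of the corresponding analysis, with hypothetical data:

    def optout_rates(trials):
        # Each trial is (difficulty_level, opted_out); returns the
        # opt-out rate per difficulty level. A rate that rises with
        # difficulty is the usual signature of uncertainty monitoring.
        counts = {}
        for level, opted_out in trials:
            n, k = counts.get(level, (0, 0))
            counts[level] = (n + 1, k + opted_out)
        return {level: k / n for level, (n, k) in sorted(counts.items())}

    trials = ([(1, 0)] * 19 + [(1, 1)] * 1 +
              [(2, 0)] * 14 + [(2, 1)] * 6 +
              [(3, 0)] * 8 + [(3, 1)] * 12)
    print(optout_rates(trials))  # {1: 0.05, 2: 0.3, 3: 0.6}

As the rat studies cited above illustrate, a rising opt-out curve can also be produced by simple reinforcement learning, so this pattern is suggestive rather than conclusive.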

Mirror neurons

Mirror neurons are neurons that fire both when an animal acts and when the animal observes the same action performed by another.[116][117][118] Thus, the neuron "mirrors" the behavior of the other, as though the observer were itself acting. Such neurons have been directly observed in primates and other species, including birds. The function of the mirror system is a subject of much speculation. Many researchers in cognitive neuroscience and cognitive psychology consider that this system provides the physiological mechanism for perception-action coupling (see the common coding theory).[118] They argue that mirror neurons may be important for understanding the actions of other people, and for learning new skills by imitation. Some researchers also speculate that mirror systems may simulate observed actions, and thus contribute to theory of mind skills,[119][120] while others relate mirror neurons to language abilities.[121] Neuroscientists such as Marco Iacoboni (UCLA) have argued that mirror neuron systems in the human brain help us understand the actions and intentions of other people. In a study published in March 2005, Iacoboni and his colleagues reported that mirror neurons could discern whether another person who was picking up a cup of tea planned to drink from it or clear it from the table. In addition, Iacoboni and a number of other researchers have argued that mirror neurons are the neural basis of the human capacity for emotions such as empathy.[118][122] Vilayanur S. Ramachandran has speculated that mirror neurons may provide the neurological basis of self-awareness.[123][124]

Evolutionary psychology

Consciousness is likely an evolved adaptation since it meets George Williams' criteria of species universality, complexity,[125] and functionality, and it is a trait that apparently increases fitness.[126] Opinions are divided as to where in biological evolution consciousness emerged and about whether or not consciousness has survival value. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles.[127] Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness.[10] Each of these scenarios raises the question of the possible survival value of consciousness.

In his paper "Evolution of consciousness," John Eccles argues that special anatomical and physical adaptations of the mammalian cerebral cortex gave rise to consciousness.[128] In contrast, others have argued that the recursive circuitry underwriting consciousness is much more primitive, having evolved initially in pre-mammalian species because it improves the capacity for interaction with both social and natural environments by providing an energy-saving "neutral" gear in an otherwise energy-expensive motor output machine.[129] Once in place, this recursive circuitry may well have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms, as outlined by Bernard J. Baars.[130] Richard Dawkins suggested that humans evolved consciousness in order to make themselves the subjects of thought.[131] Daniel Povinelli suggests that large, tree-climbing apes evolved consciousness to take into account one's own mass when moving safely among tree branches.[131] Consistent with this hypothesis, Gordon Gallup found that chimps and orangutans, but not little monkeys or terrestrial gorillas, demonstrated self-awareness in mirror tests.[131]

The concept of consciousness can refer to voluntary action, awareness, or wakefulness. However, even voluntary behaviour involves unconscious mechanisms. Many cognitive processes take place in the cognitive unconscious, unavailable to conscious awareness. Some behaviours are conscious when learned but then become unconscious, seemingly automatic. Learning, especially implicitly learning a skill, can take place outside of consciousness. For example, plenty of people know how to turn right when they ride a bike, but very few can accurately explain how they actually do so.[131]

Neural Darwinism

Neural Darwinism is a large scale theory of brain function initially proposed in 1978 by the American biologist Gerald Edelman.[132] Edelman distinguishes between what he calls primary and secondary consciousness:
  • Primary consciousness: the ability, found in humans and some animals, to integrate observed events with memory to create an awareness of the present and immediate past of the world around them. This form of consciousness is also sometimes called "sensory consciousness". Put another way, primary consciousness is the presence of various subjective sensory contents of consciousness such as sensations, perceptions, and mental images. For example, primary consciousness includes a person's experience of the blueness of the ocean, a bird's song, and the feeling of pain. Thus, primary consciousness refers to being mentally aware of things in the world in the present without any sense of past and future; it is composed of mental images bound to a time around the measurable present.[133]
  • Secondary consciousness: an individual's access to their history and plans. The concept is also loosely and commonly associated with having awareness of one's own consciousness. This ability allows its possessors to go beyond the limits of the remembered present of primary consciousness.[56]
Primary consciousness can be defined as simple awareness that includes perception and emotion. As such, it is ascribed to most animals. By contrast, secondary consciousness depends on and includes such features as self-reflective awareness, abstract thinking, volition and metacognition.[56][134]

Edelman's theory focuses on two nervous system organizations: the brainstem and limbic systems on one side and the thalamus and cerebral cortex on the other side. The brain stem and limbic system take care of essential body functioning and survival, while the thalamocortical system receives signals from sensory receptors and sends out signals to voluntary muscles such as those of the arms and legs. The theory asserts that the connection of these two systems during evolution helped animals learn adaptive behaviors.[133]

Other scientists have argued against Edelman's theory, instead suggesting that primary consciousness might have emerged with the basic vegetative systems of the brain. That is, the evolutionary origin might have come from sensations and primal emotions arising from sensors and receptors, both internal and surface, signaling that the well-being of the creature was immediately threatened—for example, hunger for air, thirst, hunger, pain, and extreme temperature change. This is based on neurological data showing the thalamic, hippocampal, orbitofrontal, insula, and midbrain sites are the key to consciousness of thirst.[135] These scientists also point out that the cortex might not be as important to primary consciousness as some neuroscientists have believed.[135] Evidence of this lies in the fact that studies show that systematically disabling parts of the cortex in animals does not remove consciousness. Another study found that children born without a cortex are conscious. Instead of cortical mechanisms, these scientists emphasize brainstem mechanisms as essential to consciousness.[135] Still, these scientists concede that higher order consciousness does involve the cortex and complex communication between different areas of the brain.

While animals with primary consciousness have long-term memory, they lack explicit narrative, and, at best, can only deal with the immediate scene in the remembered present. While they still have an advantage over animals lacking such ability, evolution has brought forth a growing complexity in consciousness, particularly in mammals. Animals with this complexity are said to have secondary consciousness. Secondary consciousness is seen in animals with semantic capabilities, such as the four great apes. It is present in its richest form in the human species, which is unique in possessing complex language made up of syntax and semantics. In considering how the neural mechanisms underlying primary consciousness arose and were maintained during evolution, it is proposed that at some time around the divergence of reptiles into mammals and then into birds, the embryological development of large numbers of new reciprocal connections allowed rich re-entrant activity to take place between the more posterior brain systems carrying out perceptual categorization and the more frontally located systems responsible for value-category memory.[56] The ability of an animal to relate a present complex scene to its own previous history of learning conferred an adaptive evolutionary advantage. At much later evolutionary epochs, further re-entrant circuits appeared that linked semantic and linguistic performance to categorical and conceptual memory systems. This development enabled the emergence of secondary consciousness.[136][137]

Ursula Voss of the Universität Bonn believes that the theory of protoconsciousness[138] may serve as an adequate explanation for the self-recognition found in birds, as they would develop secondary consciousness during REM sleep.[139] She added that many types of birds have very sophisticated language systems. Don Kuiken of the University of Alberta finds such research interesting: if we continue to study consciousness with animal models exhibiting differing types of consciousness, we may be able to separate the different forms of reflectiveness found in today's world.[140]

For the advocates of the idea of a secondary consciousness, self-recognition serves as a critical component and a key defining measure. What is most interesting, then, is the evolutionary appeal that arises with the concept of self-recognition. In non-human species and in children, the mirror test (see above) has been used as an indicator of self-awareness.

Cambridge Declaration on Consciousness

The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.[141]
In 2012, a group of neuroscientists attending a conference on "Consciousness in Human and non-Human Animals" at the University of Cambridge in the UK signed The Cambridge Declaration on Consciousness (quoted above).[1][142]

In the accompanying text they "unequivocally" asserted:[1]
  • "The field of Consciousness research is rapidly evolving. Abundant new techniques and strategies for human and non-human animal research have been developed. Consequently, more data is becoming readily available, and this calls for a periodic reevaluation of previously held preconceptions in this field. Studies of non-human animals have shown that homologous brain circuits correlated with conscious experience and perception can be selectively facilitated and disrupted to assess whether they are in fact necessary for those experiences. Moreover, in humans, new non-invasive techniques are readily available to survey the correlates of consciousness."[1]
  • "The neural substrates of emotions do not appear to be confined to cortical structures. In fact, subcortical neural networks aroused during affective states in humans are also critically important for generating emotional behaviors in animals. Artificial arousal of the same brain regions generates corresponding behavior and feeling states in both humans and non-human animals. Wherever in the brain one evokes instinctual emotional behaviors in non-human animals, many of the ensuing behaviors are consistent with experienced feeling states, including those internal states that are rewarding and punishing. Deep brain stimulation of these systems in humans can also generate similar affective states. Systems associated with affect are concentrated in subcortical regions where neural homologies abound. Young human and non-human animals without neocortices retain these brain-mind functions. Furthermore, neural circuits supporting behavioral/electrophysiological states of attentiveness, sleep and decision making appear to have arisen in evolution as early as the invertebrate radiation, being evident in insects and cephalopod mollusks (e.g., octopus)."[1]
  • "Birds appear to offer, in their behavior, neurophysiology, and neuroanatomy a striking case of parallel evolution of consciousness. Evidence of near human-like levels of consciousness has been most dramatically observed in grey parrots. Mammalian and avian emotional networks and cognitive microcircuitries appear to be far more homologous than previously thought. Moreover, certain species of birds have been found to exhibit neural sleep patterns similar to those of mammals, including REM sleep and, as was demonstrated in zebra finches, neurophysiological patterns previously thought to require a mammalian neocortex. Magpies in particular have been shown to exhibit striking similarities to humans, great apes, dolphins, and elephants in studies of mirror self-recognition."[1]
  • "In humans, the effect of certain hallucinogens appears to be associated with a disruption in cortical feedforward and feedback processing. Pharmacological interventions in non-human animals with compounds known to affect conscious behavior in humans can lead to similar perturbations in behavior in non-human animals. In humans, there is evidence to suggest that awareness is correlated with cortical activity, which does not exclude possible contributions by subcortical or early cortical processing, as in visual awareness. Evidence that human and non-human animal emotional feelings arise from homologous subcortical brain networks provide compelling evidence for evolutionarily shared primal affective qualia."[1]

Examples

A common image is the scala naturae, the ladder of nature on which animals of different species occupy successively higher rungs, with humans typically at the top.[143] A more useful approach has been to recognize that different animals may have different kinds of cognitive processes, which are better understood in terms of the ways in which they are cognitively adapted to their different ecological niches, than by positing any kind of hierarchy.[144][145]

Mammals

Dogs

Dogs were previously listed as non-self-aware animals because self-consciousness was traditionally evaluated via the mirror test, which assumes that the species being tested is visually oriented; dogs, like many other animals, are not. In 2015 a study showed that the "sniff test of self-recognition" (STSR) provides significant evidence of self-awareness in dogs, and can play a crucial role in showing that this capacity is not a specific feature of only great apes, humans and a few other animals, but depends on the way in which researchers try to verify it.[76] According to the biologist Roberto Cazzolla Gatti, "the innovative approach to test the self-awareness with a smell test highlights the need to shift the paradigm of the anthropocentric idea of consciousness to a species-specific perspective".[146]

Birds

Grey parrots

Research with captive grey parrots, especially Irene Pepperberg's work with an individual named Alex, has demonstrated that they possess the ability to associate simple human words with meanings, and to intelligently apply the abstract concepts of shape, colour, number, zero-sense, etc. According to Pepperberg and other scientists, they perform many cognitive tasks at the level of dolphins, chimpanzees, and even human toddlers.[147] Another notable African grey is N'kisi, which in 2004 was said to have a vocabulary of over 950 words which he used in creative ways.[148] For example, when Jane Goodall visited N'kisi in his New York home, he greeted her with "Got a chimp?" because he had seen pictures of her with chimpanzees in Africa.[149]

In 2011, research led by Dalila Bovet of Paris West University Nanterre La Défense, demonstrated grey parrots were able to coordinate and collaborate with each other to an extent. They were able to solve problems such as two birds having to pull strings at the same time to obtain food. In another example, one bird stood on a perch to release a food-laden tray, while the other pulled the tray out from the test apparatus. Both would then feed. The birds were observed waiting for their partners to perform the necessary actions so their behaviour could be synchronized. The parrots appeared to express individual preferences as to which of the other test birds they would work with.[150]

Corvids



Until recently it was thought that self-recognition was restricted to mammals with large brains and highly evolved social cognition, and absent from animals without a neocortex. However, in 2008, an investigation of self-recognition in corvids revealed this ability in the magpie. Mammals and birds inherited the same brain components from their last common ancestor nearly 300 million years ago, and have since independently evolved and formed significantly different brain types. The results of the mirror test showed that although magpies do not have a neocortex, they are capable of understanding that a mirror image belongs to their own body. The findings show that magpies respond in the mirror test in a manner similar to apes, dolphins, killer whales, pigs and elephants. This is a remarkable capability that, although not fully concrete in its determination of self-recognition, is at least a prerequisite of self-recognition. This is not only of interest regarding the convergent evolution of social intelligence; it is also valuable for an understanding of the general principles that govern cognitive evolution and their underlying neural mechanisms. Magpies were chosen for study based on their empathic lifestyle, a possible precursor of their capacity for self-awareness.[64]

Invertebrates


Octopus travelling with shells collected for protection

Octopuses are highly intelligent, possibly more so than any other order of invertebrates. The level of their intelligence and learning capability are debated,[151][152][153][154] but maze and problem-solving studies show they have both short- and long-term memory. Octopuses have a highly complex nervous system, only part of which is localized in the brain. Two-thirds of an octopus's neurons are found in the nerve cords of its arms. Octopus arms show a variety of complex reflex actions that persist even when they have no input from the brain.[155] Unlike vertebrates, the complex motor skills of octopuses are not organized in their brain using an internal somatotopic map of their body, instead using a non-somatotopic system unique to large-brained invertebrates.[156] Some octopuses, such as the mimic octopus, move their arms in ways that emulate the shape and movements of other sea creatures.

In laboratory studies, octopuses can easily be trained to distinguish between different shapes and patterns. They reportedly use observational learning,[157] although the validity of these findings is contested.[151][152] Octopuses have also been observed to play: repeatedly releasing bottles or toys into a circular current in their aquariums and then catching them.[158] Octopuses often escape from their aquarium and sometimes enter others. They have boarded fishing boats and opened holds to eat crabs.[153] At least four specimens of the veined octopus (Amphioctopus marginatus) have been witnessed retrieving discarded coconut shells, manipulating them, and then reassembling them to use as shelter.[159][160]

Friday, April 27, 2018

Top-down and bottom-up design

From Wikipedia, the free encyclopedia
Top-down and bottom-up are both strategies of information processing and knowledge ordering, used in a variety of fields including software, humanistic and scientific theories (see systemics), and management and organization. In practice, they can be seen as a style of thinking, teaching, or leadership.

A top-down approach (also known as stepwise design and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional sub-systems in a reverse engineering fashion. In a top-down approach an overview of the system is formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top-down model is often specified with the assistance of "black boxes", which make the model easier to manipulate. However, black boxes may fail to clarify elementary mechanisms, or may not be detailed enough to realistically validate the model. A top-down approach starts with the big picture and breaks it down into smaller segments.[1]

A bottom-up approach is the piecing together of systems to give rise to more complex systems, thus making the original systems sub-systems of the emergent system. Bottom-up processing is a type of information processing based on incoming data from the environment to form a perception. From a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a perception (output that is "built up" from processing to final cognition). In a bottom-up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and completeness. However, "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization as opposed to meeting a global purpose.
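In software terms, the contrast can be made concrete. The sketch below (Python, with invented function names) specifies a small system top-down: a high-level routine is written first, calling first-level subsystems that are then refined. Read bottom-up, the same helpers could instead have been written and tested first, then linked together into the larger system:

    from collections import Counter

    # Top-down: state the overall system first, treating each step as a
    # "black box" to be refined later.
    def generate_report(raw_records):
        cleaned = clean(raw_records)
        summary = summarize(cleaned)
        return render(summary)

    # First-level subsystems, each refined independently of the others:
    def clean(records):
        return [r.strip().lower() for r in records if r.strip()]

    def summarize(records):
        return Counter(records)

    def render(summary):
        return "\n".join(f"{item}: {n}" for item, n in summary.most_common())

    # Bottom-up reading: clean, summarize, and render are base elements
    # that are linked together to form the top-level system.
    print(generate_report(["Apple ", "pear", "apple", "  "]))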

Product design and development

During the design and development of new products, designers and engineers rely on both a bottom-up and a top-down approach. The bottom-up approach is utilized when off-the-shelf or existing components are selected and integrated into the product. An example would be selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. In a top-down approach, a custom fastener would be designed such that it would fit properly in the receiving components.[2] For perspective, for a product with more restrictive requirements (weight, geometry, safety, environment, etc.), such as a space suit, a more top-down approach is taken and almost everything is custom designed. However, when it is more important to minimize cost and increase component availability, such as with manufacturing equipment, a more bottom-up approach is taken, and as many off-the-shelf components as possible (bolts, gears, bearings, etc.) are selected. In the latter case, the receiving housings are designed around the selected components.

Computer science

Software development

In the software development process, the top-down and bottom-up approaches play a key role.

Top-down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system. Top-down approaches are implemented by attaching stubs in place of the modules that have not yet been written; this, however, delays testing of the ultimate functional units of a system until significant design is complete. The bottom-up approach emphasizes coding and early testing, which can begin as soon as the first module has been specified. This approach, however, runs the risk that modules may be coded without a clear idea of how they link to other parts of the system, and that such linking may not be as easy as first thought. Re-usability of code is one of the main benefits of the bottom-up approach.[3]
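
To make the contrast concrete, the following minimal sketch (in Python; the program and every name in it are hypothetical, invented purely for illustration) shows a top-down skeleton in which stubs stand in for modules that have not yet been written, so the overall flow can be exercised before any functional unit exists:

    # Top-down sketch: the high-level flow is written first; the
    # lower-level functions are stubs to be refined later.
    # (Hypothetical example; all names are illustrative only.)

    def load_records(path):
        """Stub: will eventually read and parse the input file."""
        return [{"id": 1, "value": 42}]  # canned data for early integration testing

    def validate(records):
        """Stub: will eventually enforce the real business rules."""
        return records

    def report(records):
        """Stub: will eventually format a full report."""
        print(f"{len(records)} record(s) processed")

    def main():
        # The complete system flow runs now, even though none of
        # the functional units is actually implemented yet.
        records = load_records("input.txt")
        report(validate(records))

    if __name__ == "__main__":
        main()

Note the trade-off described above: the skeleton runs on day one, but nothing meaningful can be tested until the stubs are replaced by real implementations.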

Top-down design was promoted in the 1970s by IBM researchers Harlan Mills and Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index. The engineering and management success of this project led to the spread of the top-down approach through IBM and the rest of the computer industry. Niklaus Wirth, the developer of the Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Wirth went on to develop languages such as Modula and Oberon (where one could define a module before knowing about the entire program specification), one can infer that top-down programming was not strictly what he promoted. Top-down methods were favored in software engineering until the late 1980s,[3] and object-oriented programming assisted in demonstrating that both top-down and bottom-up aspects of programming could be utilized.

Modern software design approaches usually combine both top-down and bottom-up approaches. Although an understanding of the complete system is usually considered necessary for good design, leading theoretically to a top-down approach, most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom-up flavor. Some projects use an approach in which a partially functional system is designed and coded to completion, and this system is then expanded to fulfill all the requirements of the project.

Programming


Building blocks are an example of bottom-up design because the parts are first created and then assembled without regard to how the parts will work in the assembly.

Top-down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top-down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions, and the process is repeated. These compartmentalized sub-routines eventually will perform actions so simple they can be easily and concisely coded. When all the various sub-routines have been coded, the program is ready for testing. By defining how the application comes together at a high level, lower-level work can be self-contained. By defining how the lower-level abstractions are expected to integrate into higher-level ones, interfaces become clearly defined.
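
A minimal sketch of this style (again hypothetical Python with invented names): the main procedure names the major functions it needs, and each is then divided into sub-routines simple enough to code concisely:

    # Stepwise refinement: main() names the major functions first;
    # each is then broken into smaller sub-routines.
    # (Hypothetical example for illustration.)

    def main():
        data = read_input()
        results = process(data)
        write_output(results)

    # First level of refinement: process() is divided into
    # simpler, self-contained pieces.
    def process(data):
        cleaned = remove_blanks(data)
        return [normalize(item) for item in cleaned]

    # Second level: the sub-routines are now so simple they can
    # be easily and concisely coded.
    def remove_blanks(data):
        return [item for item in data if item.strip()]

    def normalize(item):
        return item.strip().lower()

    def read_input():
        return ["  Alpha ", "", "BETA "]

    def write_output(results):
        print(results)

    if __name__ == "__main__":
        main()  # prints ['alpha', 'beta']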

Object-oriented programming (OOP) is a paradigm that uses "objects" to design applications and computer programs. In mechanical engineering, with software programs such as Pro/ENGINEER, Solidworks, and Autodesk Inventor, users can design products as pieces that are not part of the whole, and later add those pieces together to form assemblies, like building with LEGO. Engineers call this piece part design.

In a bottom-up approach, good intuition is necessary to decide the functionality that is to be provided by the module. If a system is to be built from an existing system, this approach is more suitable as it starts from some existing modules.
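
For contrast, here is a bottom-up sketch (hypothetical Python, illustrative names): small, independently testable base elements are written first, one of them reusing an existing standard-library module, and only then are they assembled into the top-level system:

    # Bottom-up sketch: small, self-contained elements are written
    # and tested first, then composed into a larger system.
    # (Hypothetical example for illustration.)
    import statistics

    def parse_line(line):
        """Base element: turn one CSV line into a float."""
        return float(line.split(",")[1])

    def summarize(values):
        """Next level up: combine base elements into a subsystem."""
        return {"mean": statistics.mean(values), "n": len(values)}

    def report(lines):
        """Top level, assembled last from the pieces below it."""
        return summarize([parse_line(l) for l in lines])

    print(report(["a,1.0", "b,2.0", "c,3.0"]))  # {'mean': 2.0, 'n': 3}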

Parsing

Parsing is the process of analyzing an input sequence (such as that read from a file or a keyboard) in order to determine its grammatical structure. This method is used in the analysis of both natural languages and computer languages, as in a compiler.

Bottom-up parsing is a strategy for analyzing unknown data relationships that attempts to identify the most fundamental units first, and then to infer higher-order structures from them. Top-down parsers, on the other hand, hypothesize general parse tree structures and then consider whether the known fundamental structures are compatible with the hypothesis. See Top-down parsing and Bottom-up parsing.
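
As a concrete sketch, a recursive-descent parser is a simple form of top-down parser: it starts from the grammar's top rule, hypothesizes structure, and consumes input only where the input fits the hypothesis. The toy grammar below (sums of single digits, in Python) is invented purely for illustration:

    # Minimal top-down (recursive-descent) parser for the toy grammar
    #   expr -> digit ('+' digit)*
    # (Hypothetical example; real parsers handle far richer grammars.)

    def parse_expr(tokens, pos=0):
        """Top-down: assume an 'expr' starts here, then check the input."""
        value, pos = parse_digit(tokens, pos)
        while pos < len(tokens) and tokens[pos] == "+":
            rhs, pos = parse_digit(tokens, pos + 1)
            value += rhs
        return value, pos

    def parse_digit(tokens, pos):
        if pos >= len(tokens) or not tokens[pos].isdigit():
            raise SyntaxError(f"expected digit at position {pos}")
        return int(tokens[pos]), pos + 1

    print(parse_expr(list("1+2+3")))  # (6, 5): value 6, all 5 tokens consumed

A bottom-up (e.g. shift-reduce) parser would instead start from the digit tokens themselves and combine them into ever-larger structures until a complete expression is recognized.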

Nanotechnology

Top-down and bottom-up are two approaches to the manufacture of products. These terms were first applied to the field of nanotechnology by the Foresight Institute in 1989 to distinguish between molecular manufacturing (the mass production of large, atomically precise objects) and conventional manufacturing (which can mass-produce large objects that are not atomically precise). Bottom-up approaches seek to have smaller (usually molecular) components build up into more complex assemblies, while top-down approaches seek to create nanoscale devices by using larger, externally controlled ones to direct their assembly. Certain valuable nanostructures, such as silicon nanowires, can be fabricated using either approach, with processing methods selected on the basis of targeted applications.

The top-down approach often uses traditional workshop or microfabrication methods in which externally controlled tools are used to cut, mill, and shape materials into the desired shape and order. Micropatterning techniques, such as photolithography and inkjet printing, belong to this category. Vapor treatment can be regarded as a new top-down secondary approach to engineering nanostructures.[4]

Bottom-up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to (a) self-organize or self-assemble into some useful conformation, or (b) rely on positional assembly. These approaches utilize the concepts of molecular self-assembly and/or molecular recognition (see also supramolecular chemistry). Such bottom-up approaches should, broadly speaking, be able to produce devices in parallel and much more cheaply than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increase.

Neuroscience and psychology


An example of top-down processing: Even though the second letter in each word is ambiguous, top-down processing allows for easy disambiguation based on the context.

These terms are also employed in neuroscience, cognitive neuroscience, and cognitive psychology to discuss the flow of information in processing.[5][page needed] Typically, sensory input is considered "bottom-up", and higher cognitive processes, which have more information from other sources, are considered "top-down". A bottom-up process is characterized by an absence of higher-level direction in sensory processing, whereas a top-down process is characterized by a high level of direction of sensory processing by higher cognition, such as goals or targets (Biederman, 19).[3]

According to college teaching notes written by Charles Ramskov,[who?] Rock, Neisser, and Gregory claim that the top-down approach involves perception as an active and constructive process.[6][better source needed] Additionally, perception is not directly given by stimulus input, but is the result of interactions between the stimulus, internal hypotheses, and expectations. According to Theoretical Synthesis, "when a stimulus is presented short and clarity is uncertain that gives a vague stimulus, perception becomes a top-down approach."[7]

Conversely, psychology defines bottom-up processing as an approach in which there is a progression from the individual elements to the whole. According to Ramskov, Gibson, one proponent of the bottom-up approach, claims that visual perception is a process that relies on the information available in the proximal stimulus, which is produced by the distal stimulus.[8][page needed][better source needed][9] Theoretical Synthesis also claims that bottom-up processing occurs "when a stimulus is presented long and clearly enough."[7]

Cognitively speaking, certain cognitive processes, such as fast reactions or quick visual identification, are considered bottom-up processes because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top-down because they are goal-directed. Neurologically speaking, some areas of the brain, such as area V1, have mostly bottom-up connections.[7] Other areas, such as the fusiform gyrus, have inputs from higher brain areas and are considered to have top-down influence.[10][better source needed]

The study of visual attention provides an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower is visually salient. The information that caused you to attend to the flower came to you in a bottom-up fashion: your attention was not contingent upon knowledge of the flower; the outside stimulus was sufficient on its own. Contrast this situation with one in which you are looking for a flower. You have a representation of what you are looking for, and when you see the object you are looking for, it is salient. This is an example of the use of top-down information.

In cognitive terms, two thinking approaches are distinguished. "Top-down" (or "big chunk") is stereotypically the visionary, or the person who sees the larger picture and overview. Such people focus on the big picture and from that derive the details to support it. "Bottom-up" (or "small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape. The expression "seeing the wood for the trees" references the two styles of cognition.[11]

Management and organization

In the fields of management and organization, the terms "top-down" and "bottom-up" are used to describe how decisions are made and/or how change is implemented.[12]

A "top-down" approach is where an executive decision maker or other top person makes the decisions of how something should be done. This approach is disseminated under their authority to lower levels in the hierarchy, who are, to a greater or lesser extent, bound by them. For example, when wanting to make an improvement in a hospital, a hospital administrator might decide that a major change (such as implementing a new program) is needed, and then the leader uses a planned approach to drive the changes down to the frontline staff (Stewart, Manges, Ward, 2015).[12]

A "bottom-up" approach to changes one that works from the grassroots—from a large number of people working together, causing a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a "bottom-up" decision. A bottom-up approach can be thought of as "an incremental change approach that represents an emergent process cultivated and upheld primarily by frontline workers" (Stewart, Manges, Ward, 2015, p. 241).[12]

Positive aspects of top-down approaches include their efficiency and the superb overview of the organization available at higher levels.[12] Also, external effects can be internalized. On the negative side, if reforms are perceived to be imposed 'from above', it can be difficult for lower levels to accept them (e.g. Bresser-Pereira, Maravall, and Przeworski 1993). Evidence suggests this to be true regardless of the content of the reforms (e.g. Dubois 2002). A bottom-up approach allows for more experimentation and a better feeling for what is needed at the bottom. Other evidence suggests that there is a third, combination approach to change (see Stewart, Manges, Ward, 2015).[12]

Public health

Both top-down and bottom-up approaches exist in public health. There are many examples of top-down programs, often run by governments or large inter-governmental organizations (IGOs); many of these are disease- or issue-specific, such as HIV control or smallpox eradication. Examples of bottom-up programs include many small NGOs set up to improve local access to healthcare. However, many programs seek to combine both approaches; for instance, guinea worm eradication, a single-disease international program currently run by the Carter Center, has involved the training of many local volunteers, boosting bottom-up capacity, as have international programs for hygiene, sanitation, and access to primary health-care.

Architecture

Often, the École des Beaux-Arts school of design is said to have primarily promoted top-down design because it taught that an architectural design should begin with a parti, a basic plan drawing of the overall project.[citation needed]

By contrast, the Bauhaus focused on bottom-up design. This method manifested itself in the study of translating small-scale organizational systems to a larger, more architectural scale (as with wood panel carving and furniture design).

Ecology

In ecology, top-down control refers to when a top predator controls the structure or population dynamics of the ecosystem. The classic example is kelp forest ecosystems. In such ecosystems, sea otters are a keystone predator. They prey on urchins, which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest, creating urchin barrens. In other words, such ecosystems are controlled not by the productivity of the kelp but rather by a top predator.

Bottom-up control in ecosystems refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem structure. An example would be how plankton populations are controlled by the availability of nutrients. Plankton populations tend to be higher and more complex in areas where upwelling brings nutrients to the surface.

There are many different examples of these concepts. It is common for populations to be influenced by both types of control.

Nanomedicine

From Wikipedia, the free encyclopedia

Nanomedicine is the medical application of nanotechnology.[1] Nanomedicine ranges from the medical applications of nanomaterials and biological devices, to nanoelectronic biosensors, and even possible future applications of molecular nanotechnology such as biological machines. Current problems for nanomedicine involve understanding the issues related to toxicity and environmental impact of nanoscale materials (materials whose structure is on the scale of nanometers, i.e. billionths of a meter).

Functionalities can be added to nanomaterials by interfacing them with biological molecules or structures. The size of nanomaterials is similar to that of most biological molecules and structures; therefore, nanomaterials can be useful for both in vivo and in vitro biomedical research and applications. Thus far, the integration of nanomaterials with biology has led to the development of diagnostic devices, contrast agents, analytical tools, physical therapy applications, and drug delivery vehicles.

Nanomedicine seeks to deliver a valuable set of research tools and clinically useful devices in the near future.[2][3] The National Nanotechnology Initiative expects new commercial applications in the pharmaceutical industry that may include advanced drug delivery systems, new therapies, and in vivo imaging.[4] Nanomedicine research is receiving funding from the US National Institutes of Health Common Fund program, supporting four nanomedicine development centers.[5]

Nanomedicine sales reached $16 billion in 2015, with a minimum of $3.8 billion in nanotechnology R&D being invested every year. Global funding for emerging nanotechnology increased by 45% per year in recent years, with product sales exceeding $1 trillion in 2013.[6] As the nanomedicine industry continues to grow, it is expected to have a significant impact on the economy.

Drug delivery

Nanoparticles (top), liposomes (middle), and dendrimers (bottom) are some nanomaterials being investigated for use in nanomedicine.

Nanotechnology has provided the possibility of delivering drugs to specific cells using nanoparticles.[7] Overall drug consumption and side-effects may be lowered significantly by depositing the active agent only in the morbid region, and at no higher dose than needed. Targeted drug delivery is intended to reduce the side effects of drugs, with concomitant decreases in consumption and treatment expenses. Drug delivery focuses on maximizing bioavailability both at specific places in the body and over a period of time. This can potentially be achieved by molecular targeting by nanoengineered devices.[8][9] A benefit of using the nanoscale for medical technologies is that smaller devices are less invasive and can possibly be implanted inside the body, and that biochemical reaction times are much shorter. These devices are faster and more sensitive than typical drug delivery systems.[10] The efficacy of drug delivery through nanomedicine is largely based upon: a) efficient encapsulation of the drug, b) successful delivery of the drug to the targeted region of the body, and c) successful release of the drug.[citation needed]

Drug delivery systems, lipid-[11] or polymer-based nanoparticles,[12] can be designed to improve the pharmacokinetics and biodistribution of the drug.[13][14][15] However, the pharmacokinetics and pharmacodynamics of nanomedicine are highly variable among different patients.[16] When designed to avoid the body's defence mechanisms,[17] nanoparticles have beneficial properties that can be used to improve drug delivery. Complex drug delivery mechanisms are being developed, including the ability to get drugs through cell membranes and into cell cytoplasm. Triggered response is one way for drug molecules to be used more efficiently: drugs are placed in the body and activate only on encountering a particular signal. For example, a drug with poor solubility can be replaced by a drug delivery system in which both hydrophilic and hydrophobic environments exist, improving the solubility.[18] Drug delivery systems may also be able to prevent tissue damage through regulated drug release; reduce drug clearance rates; or lower the volume of distribution and reduce the effect on non-target tissue. However, the biodistribution of these nanoparticles is still imperfect, due to the host's complex reactions to nano- and microsized materials[17] and the difficulty of targeting specific organs in the body. Nevertheless, much work is ongoing to optimize and better understand the potential and limitations of nanoparticulate systems. As research demonstrates that targeting and distribution can be augmented by nanoparticles, understanding the dangers of nanotoxicity becomes an important next step in developing their medical uses.[19]

Nanoparticles are under research for their potential to decrease antibiotic resistance or for various antimicrobial uses.[20][21][22] Nanoparticles might also be used to circumvent multidrug resistance (MDR) mechanisms.[7]

Systems under research

Two forms of nanomedicine that have already been tested in mice and are awaiting human testing use gold nanoshells to help diagnose and treat cancer,[23] along with liposomes as vaccine adjuvants and drug transport vehicles.[24][25] Drug detoxification is another application of nanomedicine which has shown promising results in rats.[26] Advances in lipid nanotechnology were also instrumental in engineering medical nanodevices and novel drug delivery systems, as well as in developing sensing applications.[27] Other examples are dendrimers and nanoporous materials, and block co-polymers, which form micelles for drug encapsulation.[12]

Polymeric nanoparticles are a competing technology to lipidic nanoparticles (based mainly on phospholipids). There is an additional risk of toxicity associated with polymers not widely studied or understood. The major advantages of polymers are stability, lower cost, and predictable characterisation. However, in the patient's body this very stability (slow degradation) is a negative factor. Phospholipids, on the other hand, are membrane lipids (already present in the body and surrounding each cell), have GRAS (Generally Recognised As Safe) status from the FDA, and are derived from natural sources without any complex chemistry involved. They are not metabolised but rather absorbed by the body, and their degradation products are themselves nutrients (fats or micronutrients).[citation needed]

Proteins and peptides exert multiple biological actions in the human body, and they have been identified as showing great promise for the treatment of various diseases and disorders. These macromolecules are called biopharmaceuticals. Targeted and/or controlled delivery of these biopharmaceuticals using nanomaterials such as nanoparticles[28] and dendrimers is an emerging field called nanobiopharmaceutics, and these products are called nanobiopharmaceuticals.[citation needed]

Another highly efficient system for microRNA delivery, for example, is nanoparticles formed by the self-assembly of two different microRNAs deregulated in cancer.[29]

Another vision is based on small electromechanical systems: nanoelectromechanical systems are being investigated for the active release of drugs and for sensors. Some potentially important applications include cancer treatment with iron nanoparticles or gold shells, and early cancer diagnosis.[30] Nanotechnology is also opening up new opportunities in implantable delivery systems, which are often preferable to injectable drugs, because the latter frequently display first-order kinetics (the blood concentration goes up rapidly, but drops exponentially over time). This rapid rise may cause difficulties with toxicity, and drug efficacy can diminish as the drug concentration falls below the targeted range.[citation needed]

Applications

Some nanotechnology-based drugs that are commercially available or in human clinical trials include:
  • Abraxane, approved by the U.S. Food and Drug Administration (FDA) to treat breast cancer,[31] non-small-cell lung cancer (NSCLC),[32] and pancreatic cancer,[33] is the nanoparticle albumin-bound paclitaxel.
  • Doxil was originally approved by the FDA for use against HIV-related Kaposi's sarcoma. It is now also used to treat ovarian cancer and multiple myeloma. The drug is encased in liposomes, which help to extend its life as it is distributed. Liposomes are self-assembling, spherical, closed colloidal structures composed of lipid bilayers that surround an aqueous space. The liposomes also increase the drug's functionality, and in particular they decrease the damage the drug does to the heart muscle.[34]
  • Onivyde, liposome encapsulated irinotecan to treat metastatic pancreatic cancer, was approved by FDA in October 2015.[35]
  • C-dots (Cornell dots) are the smallest silica-based nanoparticles, with a size of <10 nm. The particles are infused with organic dye which lights up with fluorescence. A clinical trial has been underway since 2011 to use the C-dots as a diagnostic tool to assist surgeons in identifying the location of tumor cells.[36]
  • An early-phase clinical trial using the 'minicell' nanoparticle platform for drug delivery has been conducted on patients with advanced and untreatable cancer. Built from the membranes of mutant bacteria, the minicells were loaded with paclitaxel and coated with cetuximab, an antibody that binds the epidermal growth factor receptor (EGFR), which is often overexpressed in a number of cancers, as a 'homing' device to the tumor cells. The tumor cells recognize the bacteria from which the minicells have been derived, regard them as invading microorganisms, and engulf them. Once inside, the payload of anti-cancer drug kills the tumor cells. Measured at 400 nanometers, the minicell is bigger than synthetic particles developed for drug delivery. The researchers indicated that this larger size gives the minicells a better side-effect profile, because the minicells preferentially leak out of the porous blood vessels around the tumor cells and do not reach the liver, digestive system, or skin. This Phase 1 clinical trial demonstrated that the treatment is well tolerated by patients. As a platform technology, the minicell drug delivery system can be used to treat a number of different cancers with different anti-cancer drugs, with the benefit of lower doses and fewer side-effects.[37][38]
  • In 2014, a Phase 3 clinical trial for treating inflammation and pain after cataract surgery, and a Phase 2 trial for treating dry eye disease, were initiated using nanoparticle loteprednol etabonate.[39] In 2015, the product, KPI-121, was found to produce statistically significant positive results for the post-surgery treatment.[40]

Cancer

Nanoparticles have a high surface-area-to-volume ratio. This allows many functional groups to be attached to a nanoparticle, which can then seek out and bind to certain tumor cells.[47] Additionally, the small size of nanoparticles (10 to 100 nanometers) allows them to preferentially accumulate at tumor sites, because tumors lack an effective lymphatic drainage system.[48] Limitations of conventional cancer chemotherapy include drug resistance, lack of selectivity, and lack of solubility.[46] Nanoparticles have the potential to overcome these problems.[41][49]
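
To see why the surface-area-to-volume ratio matters at this scale, here is a short worked example (illustrative Python; the particle is idealized as a sphere) showing how the ratio grows as the diameter shrinks:

    # Surface-area-to-volume ratio of a sphere: SA/V = 3/r,
    # so the ratio grows as the particle shrinks.
    # (Illustrative calculation only; real nanoparticles are not perfect spheres.)
    import math

    def sa_to_v(d_nm):
        r = d_nm / 2
        return (4 * math.pi * r**2) / ((4 / 3) * math.pi * r**3)  # simplifies to 3/r

    for d in (10, 100, 1000):
        print(f"{d:>5} nm particle: SA/V = {sa_to_v(d):.3f} per nm")
    # A 10 nm particle has 100x the SA/V of a 1000 nm (1 micron) particle,
    # leaving far more surface per unit volume for functional groups.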

In photodynamic therapy, a particle is placed within the body and is illuminated with light from the outside. The light is absorbed by the particle, and if the particle is metal, energy from the light will heat the particle and the surrounding tissue. Light may also be used to produce high-energy oxygen molecules which chemically react with and destroy most organic molecules next to them (such as tumors). This therapy is appealing for many reasons. It does not leave a "toxic trail" of reactive molecules throughout the body, as chemotherapy does, because it is directed only where the light is shone and the particles are present. Photodynamic therapy has potential as a noninvasive procedure for dealing with diseases, growths, and tumors. Kanzius RF therapy is one example of such therapy (nanoparticle hyperthermia).[citation needed] Also, gold nanoparticles have the potential to join numerous therapeutic functions into a single platform by targeting specific tumor cells, tissues, and organs.[50][51]

Imaging

In vivo imaging is another area where tools and devices are being developed.[52] Using nanoparticle contrast agents, images from modalities such as ultrasound and MRI show favorable distribution and improved contrast. In cardiovascular imaging, nanoparticles have the potential to aid visualization of blood pooling, ischemia, angiogenesis, atherosclerosis, and focal areas where inflammation is present.[52]

The small size of nanoparticles endows them with properties that can be very useful in oncology, particularly in imaging.[7] Quantum dots (nanoparticles with quantum confinement properties, such as size-tunable light emission), when used in conjunction with MRI (magnetic resonance imaging), can produce exceptional images of tumor sites. Nanoparticles of cadmium selenide (quantum dots) glow when exposed to ultraviolet light. When injected, they seep into cancer tumors. The surgeon can see the glowing tumor and use it as a guide for more accurate tumor removal. These nanoparticles are much brighter than organic dyes and need only one light source for excitation. This means that the use of fluorescent quantum dots could produce a higher-contrast image at a lower cost than today's organic dyes used as contrast media. The downside, however, is that quantum dots are usually made of quite toxic elements, though this concern may be addressed by the use of fluorescent dopants.[53]

Tracking movement can help determine how well drugs are being distributed or how substances are metabolized. It is difficult to track a small group of cells throughout the body, so scientists used to dye the cells. These dyes needed to be excited by light of a certain wavelength in order to light up. While different color dyes absorb different frequencies of light, there was a need for as many light sources as there were dyes. A way around this problem is with luminescent tags. These tags are quantum dots attached to proteins that penetrate cell membranes.[53] The dots can be random in size, can be made of bio-inert material, and demonstrate the nanoscale property that color is size-dependent. As a result, sizes are selected so that the frequency of light used to make one group of quantum dots fluoresce is an even multiple of the frequency required to make another group incandesce. Then both groups can be lit with a single light source. Researchers have also found a way to insert nanoparticles[54] into affected parts of the body so that those parts will glow, showing tumor growth or shrinkage, or organ trouble.[55]

Sensing

Nanotechnology-on-a-chip is one more dimension of lab-on-a-chip technology. Magnetic nanoparticles, bound to a suitable antibody, are used to label specific molecules, structures, or microorganisms. In particular, silica nanoparticles are inert from the photophysical point of view and can accumulate a large number of dye molecules within the nanoparticle shell.[28] Gold nanoparticles tagged with short segments of DNA can be used for the detection of genetic sequences in a sample. Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots into polymeric microbeads. Nanopore technology for analysis of nucleic acids converts strings of nucleotides directly into electronic signatures.[citation needed]

Sensor test chips containing thousands of nanowires, able to detect proteins and other biomarkers left behind by cancer cells, could enable the detection and diagnosis of cancer in its early stages from a few drops of a patient's blood.[56] Nanotechnology is helping to advance the use of arthroscopes, pencil-sized devices used in surgery with lights and cameras so that surgeons can operate through smaller incisions. The smaller the incision, the faster the healing time, which is better for patients. Research is also underway to make arthroscopes smaller than a strand of hair.[57]

Research on nanoelectronics-based cancer diagnostics could lead to tests that can be done in pharmacies. The results promise to be highly accurate and the product promises to be inexpensive. Such devices could take a very small amount of blood and detect cancer anywhere in the body in about five minutes, with a sensitivity a thousand times better than that of a conventional laboratory test. These devices are built with nanowires that detect cancer proteins; each nanowire detector is primed to be sensitive to a different cancer marker.[30] The biggest advantage of the nanowire detectors is that they could test for anywhere from ten to one hundred similar medical conditions without adding cost to the testing device.[58] Nanotechnology has also helped to personalize oncology for the detection, diagnosis, and treatment of cancer. Treatment can now be tailored to each individual's tumor for better performance. Researchers have also found ways to target specific parts of the body affected by cancer.[59]

Blood purification

Magnetic microparticles are proven research instruments for the separation of cells and proteins from complex media. The technology is available under the name Magnetic-activated cell sorting, or Dynabeads, among others. More recently, it was shown in animal models that magnetic nanoparticles can be used for the removal of various noxious compounds, including toxins, pathogens, and proteins, from whole blood in an extracorporeal circuit similar to dialysis.[60][61] In contrast to dialysis, which works on the principle of size-related diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane, purification with nanoparticles allows the specific targeting of substances. Additionally, larger compounds which are commonly not dialyzable can be removed.[citation needed]

The purification process is based on functionalized iron oxide or carbon-coated metal nanoparticles with ferromagnetic or superparamagnetic properties.[62] Binding agents such as proteins,[61] antibodies,[60] antibiotics,[63] or synthetic ligands[64] are covalently linked to the particle surface. These binding agents interact with target species to form an agglomerate. Applying an external magnetic field gradient exerts a force on the nanoparticles, so the particles can be separated from the bulk fluid, thereby cleaning it of the contaminants.[65][66]

The small size (< 100 nm) and large surface area of functionalized nanomagnets lead to advantageous properties compared to hemoperfusion, a clinically used technique for the purification of blood that is based on surface adsorption. These advantages are high loading and accessibility of the binding agents, high selectivity towards the target compound, fast diffusion, small hydrodynamic resistance, and low dosage.[67]

This approach offers new therapeutic possibilities for the treatment of systemic infections such as sepsis by directly removing the pathogen. It can also be used to selectively remove cytokines or endotoxins,[63] or for the dialysis of compounds which are not accessible by traditional dialysis methods. However, the technology is still in a preclinical phase, and the first clinical trials are not expected before 2017.[68]

Tissue engineering

Nanotechnology may be used as part of tissue engineering to help reproduce, repair, or reshape damaged tissue using suitable nanomaterial-based scaffolds and growth factors. Tissue engineering, if successful, may replace conventional treatments like organ transplants or artificial implants. Nanoparticles such as graphene, carbon nanotubes, molybdenum disulfide, and tungsten disulfide are being used as reinforcing agents to fabricate mechanically strong, biodegradable polymeric nanocomposites for bone tissue engineering applications. The addition of these nanoparticles to the polymer matrix at low concentrations (~0.2 weight %) leads to significant improvements in the compressive and flexural mechanical properties of the nanocomposites.[69][70] Potentially, these nanocomposites may be used as novel, mechanically strong, lightweight composites for bone implants.[citation needed]

For example, a flesh welder was demonstrated to fuse two pieces of chicken meat into a single piece using a suspension of gold-coated nanoshells activated by an infrared laser. This could be used to weld arteries during surgery.[71] Another example is nanonephrology, the use of nanomedicine on the kidney.

Medical devices

Neuro-electronic interfacing is a visionary goal dealing with the construction of nanodevices that will permit computers to be joined and linked to the nervous system. This idea requires the building of a molecular structure that will permit control and detection of nerve impulses by an external computer. A refuelable strategy implies that energy is refilled continuously or periodically with external sonic, chemical, tethered, magnetic, or biological electrical sources, while a nonrefuelable strategy implies that all power is drawn from internal energy storage, which would stop when all the energy is drained. A nanoscale enzymatic biofuel cell for self-powered nanodevices has been developed that uses glucose from biofluids, including human blood and watermelon.[72] One limitation of this innovation is that electrical interference, leakage, or overheating from power consumption is possible. The wiring of the structure is extremely difficult, because the components must be positioned precisely in the nervous system. The structures that provide the interface must also be compatible with the body's immune system.[73]

Molecular nanotechnology is a speculative subfield of nanotechnology concerning the possibility of engineering molecular assemblers, machines which could re-order matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damage and infections. Molecular nanotechnology is highly theoretical, seeking to anticipate what inventions nanotechnology might yield and to propose an agenda for future inquiry. The proposed elements of molecular nanotechnology, such as molecular assemblers and nanorobots, are far beyond current capabilities.[1][73][74][75] Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging. K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair machines, including ones operating within cells and utilizing as-yet hypothetical molecular machines, in his 1986 book Engines of Creation, and the first technical discussion of medical nanorobots, by Robert Freitas, appeared in 1999.[1] Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes advanced medical nanorobotics could completely remedy the effects of aging by 2030.[76] According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines (see nanotechnology). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.[77]

Operator (computer programming)

From Wikipedia, the free encyclopedia