
Wednesday, July 6, 2022

Cognitive science

From Wikipedia, the free encyclopedia

Figure illustrating the fields that contributed to the birth of cognitive science, including linguistics, neuroscience, artificial intelligence, philosophy, anthropology, and psychology

Cognitive science is the interdisciplinary, scientific study of the mind and its processes, with input from linguistics, psychology, neuroscience, philosophy, computer science/artificial intelligence, and anthropology. It examines the nature, the tasks, and the functions of cognition (in a broad sense). Cognitive scientists study intelligence and behavior, with a focus on how nervous systems represent, process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision-making to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."

The goal of cognitive science is to understand and formulate the principles of intelligence with the hope that this will lead to a better comprehension of the mind and of learning. The cognitive sciences began as an intellectual movement in the 1950s often referred to as the cognitive revolution.

History

The cognitive sciences began as an intellectual movement in the 1950s, called the cognitive revolution. Cognitive science has a prehistory traceable back to ancient Greek philosophical texts (see Plato's Meno and Aristotle's De Anima). Modernist philosophers such as Descartes, David Hume, Immanuel Kant, Benedict de Spinoza, Nicolas Malebranche, Pierre Cabanis, Leibniz, and John Locke rejected scholasticism while mostly having never read Aristotle, and they were working with an entirely different set of tools and core concepts than those of the cognitive scientist.

The modern culture of cognitive science can be traced back to the early cyberneticists in the 1930s and 1940s, such as Warren McCulloch and Walter Pitts, who sought to understand the organizing principles of the mind. McCulloch and Pitts developed the first variants of what are now known as artificial neural networks, models of computation inspired by the structure of biological neural networks.

Another precursor was the early development of the theory of computation and the digital computer in the 1940s and 1950s. Kurt Gödel, Alonzo Church, Alan Turing, and John von Neumann were instrumental in these developments. The modern computer, or Von Neumann machine, would play a central role in cognitive science, both as a metaphor for the mind, and as a tool for investigation.

The first instance of cognitive science experiments being done at an academic institution took place at the MIT Sloan School of Management, established by J.C.R. Licklider, who worked within the psychology department and conducted experiments using computer memory as models for human cognition.

In 1959, Noam Chomsky published a scathing review of B. F. Skinner's book Verbal Behavior. At the time, Skinner's behaviorist paradigm dominated the field of psychology within the United States. Most psychologists focused on functional relations between stimulus and response, without positing internal representations. Chomsky argued that in order to explain language, we needed a theory like generative grammar, which not only attributed internal representations but characterized their underlying order.

The term cognitive science was coined by Christopher Longuet-Higgins in his 1973 commentary on the Lighthill report, which concerned the then-current state of artificial intelligence research. In the same decade, the journal Cognitive Science and the Cognitive Science Society were founded. The founding meeting of the Cognitive Science Society was held at the University of California, San Diego in 1979, which resulted in cognitive science becoming an internationally visible enterprise. In 1972, Hampshire College started the first undergraduate education program in Cognitive Science, led by Neil Stillings. In 1982, with assistance from Professor Stillings, Vassar College became the first institution in the world to grant an undergraduate degree in Cognitive Science. In 1986, the first Cognitive Science Department in the world was founded at the University of California, San Diego.

In the 1970s and early 1980s, as access to computers increased, artificial intelligence research expanded. Researchers such as Marvin Minsky would write computer programs in languages such as LISP to attempt to formally characterize the steps that human beings went through, for instance, in making decisions and solving problems, in the hope of better understanding human thought, and also in the hope of creating artificial minds. This approach is known as "symbolic AI".

Eventually the limits of the symbolic AI research program became apparent. For instance, it seemed to be unrealistic to comprehensively list human knowledge in a form usable by a symbolic computer program. The late 1980s and 1990s saw the rise of neural networks and connectionism as a research paradigm. Under this point of view, often attributed to James McClelland and David Rumelhart, the mind could be characterized as a set of complex associations, represented as a layered network. Critics argue that there are some phenomena which are better captured by symbolic models, and that connectionist models are often so complex as to have little explanatory power. Recently symbolic and connectionist models have been combined, making it possible to take advantage of both forms of explanation. While both connectionism and symbolic approaches have proven useful for testing various hypotheses and exploring approaches to understanding aspects of cognition and lower-level brain functions, neither is biologically realistic, and therefore both suffer from a lack of neuroscientific plausibility. Connectionism has proven useful for exploring computationally how cognition emerges in development and occurs in the human brain, and has provided alternatives to strictly domain-specific/domain-general approaches. For example, scientists such as Jeff Elman, Liz Bates, and Annette Karmiloff-Smith have posited that networks in the brain emerge from the dynamic interaction between neural systems and environmental input.

Principles

Levels of analysis

A central tenet of cognitive science is that a complete understanding of the mind/brain cannot be attained by studying only a single level. An example would be the problem of remembering a phone number and recalling it later. One approach to understanding this process would be to study behavior through direct observation, or naturalistic observation. A person could be presented with a phone number and be asked to recall it after some delay; then the accuracy of the response could be measured. Another approach would be to study the firings of individual neurons while a person is trying to remember the phone number. Neither of these experiments on its own would fully explain how the process of remembering a phone number works. Even if the technology to map out every neuron in the brain in real time were available and it were known when each neuron fired, it would still be impossible to know how a particular firing of neurons translates into the observed behavior. Thus an understanding of how these two levels relate to each other is imperative. Francisco Varela, in The Embodied Mind: Cognitive Science and Human Experience, argues that "the new sciences of the mind need to enlarge their horizon to encompass both lived human experience and the possibilities for transformation inherent in human experience". On the classic cognitivist view, this can be provided by a functional-level account of the process. Studying a particular phenomenon from multiple levels creates a better understanding of the processes that occur in the brain to give rise to a particular behavior. Marr gave a famous description of three levels of analysis (a sketch illustrating them follows the list below):

  1. The computational theory, specifying the goals of the computation;
  2. Representation and algorithms, giving a representation of the inputs and outputs and the algorithms which transform one into the other; and
  3. The hardware implementation, or how algorithm and representation may be physically realized.
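To make the distinction between the levels concrete, here is a minimal sketch (in Python, not from the original article) of how the phone-number example might be described at Marr's first two levels. The function names, the chunking scheme, and the buffer capacity are illustrative assumptions only; the third, implementational level concerns the physical substrate and is noted only in a comment.

    # Illustrative sketch of Marr's first two levels for the phone-number example.
    # The names, chunk size, and buffer capacity are hypothetical, chosen only to
    # make the distinction between levels concrete.

    # Level 1 - computational theory: WHAT is computed and why.
    # Goal: given a phone number presented once, reproduce it after a delay.
    #   input:  a sequence of digits
    #   output: the same sequence, recalled later

    # Level 2 - representation and algorithm: how inputs and outputs are
    # represented and what procedure transforms one into the other.
    def encode(digits, chunk_size=3):
        """Represent the number as fixed-size chunks of digits."""
        return [digits[i:i + chunk_size] for i in range(0, len(digits), chunk_size)]

    def rehearse(chunks, capacity=4):
        """Keep only as many chunks as a limited short-term buffer can hold."""
        return chunks[:capacity]

    def recall(buffer):
        """Reassemble the stored chunks into a digit string."""
        return "".join(buffer)

    number = "5558675309"
    print(recall(rehearse(encode(number))))  # -> "5558675309" when it fits the buffer

    # Level 3 - hardware implementation: how the representation and algorithm are
    # physically realized (neurons and synapses), which this code does not capture.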

Interdisciplinary nature

Cognitive science is an interdisciplinary field with contributors from various fields, including psychology, neuroscience, linguistics, philosophy of mind, computer science, anthropology, and biology. Cognitive scientists work collectively in the hope of understanding the mind and its interactions with the surrounding world, much as other sciences do. The field regards itself as compatible with the physical sciences and uses the scientific method as well as simulation or modeling, often comparing the output of models with aspects of human cognition. As in psychology, there is some doubt whether there is a unified cognitive science, which has led some researchers to prefer 'cognitive sciences' in the plural.

Many, but not all, who consider themselves cognitive scientists hold a functionalist view of the mind—the view that mental states and processes should be explained by their function, that is, by what they do. According to the multiple realizability account of functionalism, even non-human systems such as robots and computers can be ascribed cognition.

Cognitive science: the term

The term "cognitive" in "cognitive science" is used for "any kind of mental operation or structure that can be studied in precise terms" (Lakoff and Johnson, 1999). This conceptualization is very broad, and should not be confused with how "cognitive" is used in some traditions of analytic philosophy, where "cognitive" has to do only with formal rules and truth conditional semantics.

The earliest entries for the word "cognitive" in the OED take it to mean roughly "pertaining to the action or process of knowing". The first entry, from 1586, shows the word was at one time used in the context of discussions of Platonic theories of knowledge. Most in cognitive science, however, presumably do not believe their field is the study of anything as certain as the knowledge sought by Plato.

Scope

Cognitive science is a large field, and covers a wide array of topics on cognition. However, it should be recognized that cognitive science has not always been equally concerned with every topic that might bear relevance to the nature and operation of minds. Classical cognitivists largely de-emphasized or avoided social and cultural factors, embodiment, emotion, consciousness, animal cognition, and comparative and evolutionary psychologies. However, with the decline of behaviorism, internal states such as affects and emotions, as well as awareness and covert attention, became approachable again. For example, situated and embodied cognition theories take into account the current state of the environment as well as the role of the body in cognition. With the newfound emphasis on information processing, observable behavior was no longer the sole hallmark of psychological theory; the modeling or recording of mental states took its place.

Below are some of the main topics that cognitive science is concerned with. This is not an exhaustive list. See List of cognitive science topics for a list of various aspects of the field.

Artificial intelligence

Artificial intelligence (AI) involves the study of cognitive phenomena in machines. One of the practical goals of AI is to implement aspects of human intelligence in computers. Computers are also widely used as a tool with which to study cognitive phenomena. Computational modeling uses simulations to study how human intelligence may be structured. (See § Computational modeling.)

There is some debate in the field as to whether the mind is best viewed as a huge array of small but individually feeble elements (i.e. neurons), or as a collection of higher-level structures such as symbols, schemes, plans, and rules. The former view uses connectionism to study the mind, whereas the latter emphasizes symbolic artificial intelligence. One way to frame the issue is to ask whether it is possible to accurately simulate a human brain on a computer without accurately simulating the neurons that make it up.
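As a toy illustration of this contrast (and not a claim about how the brain actually works), the Python sketch below computes the same logical function twice: once as an explicit symbolic rule, and once as a single threshold unit whose behavior is carried entirely by its connection weights. The weights and threshold are arbitrary illustrative choices.

    # Toy contrast between the symbolic and connectionist views (illustrative only).

    # Symbolic view: cognition as explicit rules over structured symbols.
    def symbolic_and(a: bool, b: bool) -> bool:
        # An explicit, human-readable rule.
        return a and b

    # Connectionist view: the same function emerges from weighted connections
    # between simple, individually "feeble" units.
    def connectionist_and(a: int, b: int) -> int:
        weights = [1.0, 1.0]
        threshold = 1.5
        activation = weights[0] * a + weights[1] * b
        return int(activation > threshold)

    # Both views agree on every input, but they explain the computation differently.
    for a in (0, 1):
        for b in (0, 1):
            assert symbolic_and(bool(a), bool(b)) == bool(connectionist_and(a, b))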

Attention

Attention is the selection of important information. The human mind is bombarded with millions of stimuli and must have a way of deciding which of this information to process. Attention is sometimes seen as a spotlight, meaning one can only shine the light on a particular set of information. Experiments that support this metaphor include the dichotic listening task (Cherry, 1957) and studies of inattentional blindness (Mack and Rock, 1998). In the dichotic listening task, subjects are presented with two different messages, one in each ear, and told to focus on only one of the messages. At the end of the experiment, when asked about the content of the unattended message, subjects cannot report it.

Bodily processes related to cognition

Embodied cognition approaches to cognitive science emphasize the role of body and environment in cognition. This includes both neural and extra-neural bodily processes, and factors that range from affective and emotional processes, to posture, motor control, proprioception, and kinaesthesis, to autonomic processes that involve heartbeat and respiration, to the role of the enteric gut microbiome. It also includes accounts of how the body engages with or is coupled to social and physical environments. 4E (embodied, embedded, extended and enactive) cognition includes a broad range of views about brain-body-environment interaction, from causal embeddedness to stronger claims about how the mind extends to include tools and instruments, as well as the role of social interactions, action-oriented processes, and affordances. 4E theories range from those closer to classic cognitivism (so-called "weak" embodied cognition) to stronger extended and enactive versions that are sometimes referred to as radical embodied cognitive science.

Knowledge and processing of language

A well-known example of a phrase structure tree. This is one way of representing human language, showing how different components are organized hierarchically.

The ability to learn and understand language is an extremely complex process. Language is acquired within the first few years of life, and all humans under normal circumstances are able to acquire language proficiently. A major driving force in theoretical linguistics is discovering what properties language must have in the abstract in order to be learned in such a fashion. Some of the driving research questions in studying how the brain itself processes language include: (1) To what extent is linguistic knowledge innate or learned?, (2) Why is it more difficult for adults to acquire a second language than it is for infants to acquire their first language?, and (3) How are humans able to understand novel sentences?

The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences. Linguistics often divides language processing into orthography, phonetics, phonology, morphology, syntax, semantics, and pragmatics. Many aspects of language can be studied from each of these components and from their interaction.
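As a rough illustration of the hierarchical representation mentioned in the caption above, the following Python sketch encodes a phrase-structure tree for a made-up example sentence as nested tuples. The sentence and category labels are illustrative assumptions, not an analysis taken from the article.

    # A phrase-structure tree for the illustrative sentence "the dog chased the cat",
    # represented as nested (label, children...) tuples.
    tree = ("S",
            ("NP", ("Det", "the"), ("N", "dog")),
            ("VP", ("V", "chased"),
                   ("NP", ("Det", "the"), ("N", "cat"))))

    def leaves(node):
        """Recover the words by walking the hierarchy from left to right."""
        if isinstance(node, str):
            return [node]
        words = []
        for child in node[1:]:       # node[0] is the category label
            words.extend(leaves(child))
        return words

    print(" ".join(leaves(tree)))    # -> "the dog chased the cat"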

The study of language processing in cognitive science is closely tied to the field of linguistics. Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of. Linguists have found that, while humans form sentences in ways apparently governed by very complex systems, they are remarkably unaware of the rules that govern their own speech. Thus linguists must resort to indirect methods to determine what those rules might be, if indeed rules as such exist. In any event, if speech is indeed governed by rules, they appear to be opaque to any conscious consideration.

Learning and development

Learning and development are the processes by which we acquire knowledge and information over time. Infants are born with little or no knowledge (depending on how knowledge is defined), yet they rapidly acquire the ability to use language, walk, and recognize people and objects. Research in learning and development aims to explain the mechanisms by which these processes might take place.

A major question in the study of cognitive development is the extent to which certain abilities are innate or learned. This is often framed in terms of the nature and nurture debate. The nativist view emphasizes that certain features are innate to an organism and are determined by its genetic endowment. The empiricist view, on the other hand, emphasizes that certain abilities are learned from the environment. Although clearly both genetic and environmental input is needed for a child to develop normally, considerable debate remains about how genetic information might guide cognitive development. In the area of language acquisition, for example, some (such as Steven Pinker) have argued that specific information containing universal grammatical rules must be contained in the genes, whereas others (such as Jeffrey Elman and colleagues in Rethinking Innateness) have argued that Pinker's claims are biologically unrealistic. They argue that genes determine the architecture of a learning system, but that specific "facts" about how grammar works can only be learned as a result of experience.

Memory

Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and short-term store. Long-term memory allows us to store information over prolonged periods (days, weeks, years). We do not yet know the practical limit of long-term memory capacity. Short-term memory allows us to store information over short time scales (seconds or minutes).

Memory is also often grouped into declarative and procedural forms. Declarative memory—grouped into subsets of semantic and episodic forms of memory—refers to our memory for facts and specific knowledge, specific meanings, and specific experiences (e.g. "Are apples food?", or "What did I eat for breakfast four days ago?"). Procedural memory allows us to remember actions and motor sequences (e.g. how to ride a bicycle) and is often dubbed implicit knowledge or memory.

Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and on the interrelationship between cognition and memory. For example, what mental processes does a person go through to retrieve a long-lost memory? Or, what differentiates the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) from recall (retrieving a memory, as in "fill-in-the-blank")?

Perception and action

The Necker cube, an example of an optical illusion.

An optical illusion: square A is exactly the same shade of gray as square B (see the checker shadow illusion).

Perception is the ability to take in information via the senses and process it in some way. Vision and hearing are two dominant senses that allow us to perceive the environment. Some questions in the study of visual perception, for example, include: (1) How are we able to recognize objects?, (2) Why do we perceive a continuous visual environment, even though we only see small bits of it at any one time? One way to study visual perception is to examine how people process optical illusions. The Necker cube is an example of a bistable percept; that is, the cube can be interpreted as being oriented in either of two directions.

The study of haptic (tactile), olfactory, and gustatory stimuli also falls within the domain of perception.

Action is taken to refer to the output of a system. In humans, this is accomplished through motor responses. Spatial planning and movement, speech production, and complex motor movements are all aspects of action.

Consciousness

Consciousness is the awareness of external objects and of experiences within oneself. It gives the mind the ability to experience a sense of self.

Research methods

Many different methodologies are used to study cognitive science. As the field is highly interdisciplinary, research often cuts across multiple areas of study, drawing on research methods from psychology, neuroscience, computer science and systems theory.

Behavioral experiments

In order to have a description of what constitutes intelligent behavior, one must study behavior itself. This type of research is closely tied to that in cognitive psychology and psychophysics. By measuring behavioral responses to different stimuli, one can understand something about how those stimuli are processed. Lewandowski & Strohmetz (2009) reviewed a collection of innovative uses of behavioral measurement in psychology including behavioral traces, behavioral observations, and behavioral choice. Behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present (e.g., litter in a parking lot or readings on an electric meter). Behavioral observations involve the direct witnessing of the actor engaging in the behavior (e.g., watching how close a person sits next to another person). Behavioral choices are when a person selects between two or more options (e.g., voting behavior, choice of a punishment for another participant).

  • Reaction time. The time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. For example, if in a search task the reaction times vary proportionally with the number of elements, this suggests that the cognitive process of searching is serial rather than parallel (see the sketch after this list).
  • Psychophysical responses. Psychophysical experiments are an old psychological technique that has been adopted by cognitive psychology. They typically involve making judgments of some physical property, e.g. the loudness of a sound. Correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. Some examples include:
    • sameness judgments for colors, tones, textures, etc.
    • threshold differences for colors, tones, textures, etc.
  • Eye tracking. This methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. The fixation point of the eyes is linked to an individual's focus of attention. Thus, by monitoring eye movements, we can study what information is being processed at a given time. Eye tracking allows us to study cognitive processes on extremely short time scales. Eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed.
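The following Python sketch illustrates the reaction-time logic mentioned in the first bullet above, using made-up numbers: a clearly positive least-squares slope of reaction time against set size is consistent with serial search, while a near-zero slope is more consistent with parallel processing. The data and the 40 ms/item figure are hypothetical.

    # Hypothetical reaction-time data (ms) for a visual search task.
    # If RT grows roughly linearly with the number of items, a serial process is
    # suggested; a flat slope is more consistent with parallel processing.
    set_sizes = [2, 4, 8, 16]
    rt_serial = [520, 600, 760, 1080]   # made-up data: about 40 ms per extra item
    rt_parallel = [515, 520, 518, 522]  # made-up data: roughly flat

    def slope(x, y):
        """Ordinary least-squares slope of y on x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        return sxy / sxx

    print(f"serial-like slope:   {slope(set_sizes, rt_serial):.1f} ms/item")    # 40.0
    print(f"parallel-like slope: {slope(set_sizes, rt_parallel):.1f} ms/item")  # ~0.4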

Brain imaging

Image of the human head showing the brain.

Brain imaging involves analyzing activity within the brain while performing various tasks. This allows us to link behavior and brain function to help understand how information is processed. Different types of imaging techniques vary in their temporal (time-based) and spatial (location-based) resolution. Brain imaging is often used in cognitive neuroscience.

  • Single-photon emission computed tomography and positron emission tomography. SPECT and PET use radioactive isotopes, which are injected into the subject's bloodstream and taken up by the brain. By observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. PET has similar spatial resolution to fMRI, but it has extremely poor temporal resolution.
  • Electroencephalography. EEG measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. This technique has an extremely high temporal resolution, but a relatively poor spatial resolution.
  • Functional magnetic resonance imaging. fMRI measures the relative amount of oxygenated blood flowing to different parts of the brain. More oxygenated blood in a particular region is assumed to correlate with an increase in neural activity in that part of the brain. This allows us to localize particular functions within different brain regions. fMRI has moderate spatial and temporal resolution (a sketch of how a task time course relates to the predicted signal follows this list).
  • Optical imaging. This technique uses infrared transmitters and receivers to measure the amount of light reflected by blood near different areas of the brain. Since oxygenated and deoxygenated blood reflect light by different amounts, we can study which areas are more active (i.e., those that have more oxygenated blood). Optical imaging has moderate temporal resolution, but poor spatial resolution. It also has the advantage that it is extremely safe and can be used to study infants' brains.
  • Magnetoencephalography. MEG measures magnetic fields resulting from cortical activity. It is similar to EEG, except that it has improved spatial resolution since the magnetic fields it measures are not as blurred or attenuated by the scalp, meninges and so forth as the electrical activity measured in EEG is. MEG uses SQUID sensors to detect tiny magnetic fields.
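As a rough illustration of how fMRI analyses relate a task to the measured signal, the Python sketch below convolves a block-design stimulus time course with a simplified gamma-shaped hemodynamic response function (HRF) to produce a predicted BOLD time course. The HRF shape, its parameters, and the block timing are illustrative assumptions, not the canonical response functions used by standard analysis packages.

    import numpy as np

    # Minimal sketch: predict a BOLD-like time course by convolving a stimulus
    # "boxcar" with a simplified gamma-shaped hemodynamic response function.
    TR = 1.0                   # sampling interval in seconds
    t = np.arange(0, 30, TR)   # 30 s of HRF support

    def simple_hrf(t, peak=6.0):
        """Gamma-like impulse response peaking at `peak` seconds (illustrative)."""
        h = (t / peak) ** 2 * np.exp(-2 * (t - peak) / peak)
        return h / h.max()

    # Stimulus: 10 s blocks of task alternating with 10 s of rest, for 100 s.
    time = np.arange(0, 100, TR)
    stimulus = ((time // 10) % 2 == 0).astype(float)

    # Predicted BOLD response = stimulus convolved with the HRF, then truncated.
    predicted_bold = np.convolve(stimulus, simple_hrf(t))[: len(time)]
    print(predicted_bold.round(2)[:20])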

Computational modeling

An artificial neural network with two layers.

Computational models require a mathematically and logically formal representation of a problem. Computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. Computational modeling can help us understand the functional organization of a particular cognitive phenomenon. Approaches to cognitive modeling can be categorized as: (1) symbolic, focusing on abstract mental functions of an intelligent mind by means of symbols; (2) subsymbolic, focusing on the neural and associative properties of the human brain; and (3) approaches that cross the symbolic–subsymbolic border, including hybrid models.

  • Symbolic modeling evolved from computer science paradigms using the technologies of knowledge-based systems, as well as from a philosophical perspective (e.g. "Good Old-Fashioned Artificial Intelligence" (GOFAI)). Such models were developed by the first cognitive researchers and later used in information engineering for expert systems. Since the early 1990s the approach was generalized in systemics for the investigation of functional human-like intelligence models, such as personoids, and, in parallel, developed as the SOAR environment. Recently, especially in the context of cognitive decision-making, symbolic cognitive modeling has been extended to the socio-cognitive approach, which includes social and organizational cognition interrelated with a sub-symbolic non-conscious layer.
  • Subsymbolic modeling includes connectionist/neural network models. Connectionism relies on the idea that the mind/brain is composed of simple nodes and that its problem-solving capacity derives from the connections between them; neural nets are textbook implementations of this approach (see the sketch after this list). Some critics of this approach feel that while these models approach biological reality as a representation of how the system works, they lack explanatory power because, even in systems endowed with simple connection rules, the emerging high complexity makes them less interpretable at the connection level than they apparently are at the macroscopic level.
  • Other approaches gaining in popularity include (1) dynamical systems theory, (2) mapping symbolic models onto connectionist models (neural-symbolic integration or hybrid intelligent systems), and (3) Bayesian models, which are often drawn from machine learning.
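The following is a minimal sketch of the subsymbolic approach, assuming nothing beyond NumPy: a small layered network whose ability to compute XOR (a function no single linear unit can compute) emerges from learned connection weights rather than explicit rules. The network size, learning rate, and number of training passes are arbitrary illustrative choices.

    import numpy as np

    # Minimal connectionist sketch: a 2-4-1 feedforward network trained with
    # backpropagation to learn XOR. Problem-solving capacity resides entirely in
    # the learned weights, not in any explicit symbolic rule.
    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for _ in range(10000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass (gradient of squared error)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(out.round(2).ravel())   # should approach [0, 1, 1, 0]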

All the above approaches tend either to be generalized into integrated computational models of a synthetic/abstract intelligence (i.e. a cognitive architecture), so that they can be applied to explaining and improving individual and social/organizational decision-making and reasoning, or to focus on single simulative programs (or microtheories/"middle-range" theories) that model specific cognitive faculties (e.g. vision, language, categorization, etc.).

Neurobiological methods

Research methods borrowed directly from neuroscience and neuropsychology can also help us to understand aspects of intelligence. These methods allow us to understand how intelligent behavior is implemented in a physical system.

Key findings

Cognitive science has given rise to models of human cognitive bias and risk perception, and has been influential in the development of behavioral finance, part of economics. It has also given rise to a new theory of the philosophy of mathematics (related to denotational mathematics), and many theories of artificial intelligence, persuasion and coercion. It has made its presence known in the philosophy of language and epistemology as well as constituting a substantial wing of modern linguistics. Fields of cognitive science have been influential in understanding the brain's particular functional systems (and functional deficits) ranging from speech production to auditory processing and visual perception. It has made progress in understanding how damage to particular areas of the brain affects cognition, and it has helped to uncover the root causes and results of specific dysfunctions, such as dyslexia, anopia, and hemispatial neglect.

Notable researchers

Name | Year of birth | Year of contribution | Contribution(s)
David Chalmers | 1966 | 1995 | Dualism, hard problem of consciousness
Daniel Dennett | 1942 | 1987 | Offered a computational systems perspective (multiple drafts model)
John Searle | 1932 | 1980 | Chinese room
Douglas Hofstadter | 1945 | 1979 | Gödel, Escher, Bach
Jerry Fodor | 1935 | 1968, 1975 | Functionalism
Alan Baddeley | 1934 | 1974 | Baddeley's model of working memory
Marvin Minsky | 1927 | 1970s, early 1980s | Wrote computer programs in languages such as LISP to attempt to formally characterize the steps that human beings go through, such as making decisions and solving problems
Christopher Longuet-Higgins | 1923 | 1973 | Coined the term cognitive science
Noam Chomsky | 1928 | 1959 | Published a review of B. F. Skinner's book Verbal Behavior, which began cognitivism against then-dominant behaviorism
George Miller | 1920 | 1956 | Wrote about the capacities of human thinking through mental representations
Herbert Simon | 1916 | 1956 | Co-created the Logic Theory Machine and the General Problem Solver with Allen Newell; EPAM (Elementary Perceiver and Memorizer) theory; organizational decision-making
John McCarthy | 1927 | 1955 | Coined the term artificial intelligence and organized the famous Dartmouth conference in Summer 1956, which started AI as a field
McCulloch and Pitts | | 1930s–1940s | Developed early artificial neural networks
J. C. R. Licklider | 1915 | | Established MIT Sloan School of Management
Lila R. Gleitman | 1929 | 1970s–2010s | Wide-ranging contributions to understanding the cognition of language acquisition, including syntactic bootstrapping theory
Eleanor Rosch | 1938 | 1976 | Development of the prototype theory of categorisation
Philip N. Johnson-Laird | 1936 | 1980 | Introduced the idea of mental models in cognitive science
Dedre Gentner | 1944 | 1983 | Development of the structure-mapping theory of analogical reasoning
Allen Newell | 1927 | 1990 | Development of the field of cognitive architecture in cognitive modelling and artificial intelligence
Annette Karmiloff-Smith | 1938 | 1992 | Integrating neuroscience and computational modelling into theories of cognitive development
David Marr | 1945 | 1990 | Proponent of the three-level hypothesis of levels of analysis of computational systems
Peter Gärdenfors | 1949 | 2000 | Creator of the conceptual space framework used in cognitive modelling and artificial intelligence
Linda B. Smith | 1951 | 1993 | Together with Esther Thelen, created a dynamical systems approach to understanding cognitive development

Some of the more recognized names in cognitive science are usually either the most controversial or the most cited. Within philosophy, some familiar names include Daniel Dennett, who writes from a computational systems perspective, John Searle, known for his controversial Chinese room argument, and Jerry Fodor, who advocates functionalism.

Others include David Chalmers, who advocates Dualism and is also known for articulating the hard problem of consciousness, and Douglas Hofstadter, famous for writing Gödel, Escher, Bach, which questions the nature of words and thought.

In the realm of linguistics, Noam Chomsky and George Lakoff have been influential (both have also become notable as political commentators). In artificial intelligence, Marvin Minsky, Herbert A. Simon, and Allen Newell are prominent.

Popular names in the discipline of psychology include George A. Miller, James McClelland, Philip Johnson-Laird, Lawrence Barsalou, Vittorio Guidano, Howard Gardner and Steven Pinker. Anthropologists Dan Sperber, Edwin Hutchins, Bradd Shore, James Wertsch and Scott Atran have been involved in collaborative projects with cognitive and social psychologists, political scientists and evolutionary biologists in attempts to develop general theories of culture formation, religion, and political association.

Computational theories (with models and simulations) have also been developed, by David Rumelhart, James McClelland and Philip Johnson-Laird.

Epistemics

Epistemics is a term coined in 1969 by the University of Edinburgh with the foundation of its School of Epistemics. Epistemics is to be distinguished from epistemology in that epistemology is the philosophical theory of knowledge, whereas epistemics signifies the scientific study of knowledge.

Christopher Longuet-Higgins has defined it as "the construction of formal models of the processes (perceptual, intellectual, and linguistic) by which knowledge and understanding are achieved and communicated." In his 1978 essay "Epistemics: The Regulative Theory of Cognition", Alvin I. Goldman claims to have coined the term "epistemics" to describe a reorientation of epistemology. Goldman maintains that his epistemics is continuous with traditional epistemology and the new term is only to avoid opposition. Epistemics, in Goldman's version, differs only slightly from traditional epistemology in its alliance with the psychology of cognition; epistemics stresses the detailed study of mental processes and information-processing mechanisms that lead to knowledge or beliefs.

In the mid-1980s, the School of Epistemics was renamed as The Centre for Cognitive Science (CCS). In 1998, CCS was incorporated into the University of Edinburgh's School of Informatics.

Microbial biodegradation

From Wikipedia, the free encyclopedia

Microbial biodegradation is the use of bioremediation and biotransformation methods to harness the naturally occurring ability of microbial xenobiotic metabolism to degrade, transform or accumulate environmental pollutants, including hydrocarbons (e.g. oil), polychlorinated biphenyls (PCBs), polyaromatic hydrocarbons (PAHs), heterocyclic compounds (such as pyridine or quinoline), pharmaceutical substances, radionuclides and metals.

Interest in the microbial biodegradation of pollutants has intensified in recent years, and recent major methodological breakthroughs have enabled detailed genomic, metagenomic, proteomic, bioinformatic and other high-throughput analyses of environmentally relevant microorganisms, providing new insights into biodegradative pathways and the ability of organisms to adapt to changing environmental conditions.

Biological processes play a major role in the removal of contaminants and take advantage of the catabolic versatility of microorganisms to degrade or convert such compounds. In environmental microbiology, genome-based global studies are increasing the understanding of metabolic and regulatory networks, as well as providing new information on the evolution of degradation pathways and molecular adaptation strategies to changing environmental conditions.

Aerobic biodegradation of pollutants

The increasing amount of bacterial genomic data provides new opportunities for understanding the genetic and molecular bases of the degradation of organic pollutants. Aromatic compounds are among the most persistent of these pollutants and lessons can be learned from the recent genomic studies of Burkholderia xenovorans LB400 and Rhodococcus sp. strain RHA1, two of the largest bacterial genomes completely sequenced to date. These studies have helped expand our understanding of bacterial catabolism, non-catabolic physiological adaptation to organic compounds, and the evolution of large bacterial genomes. First, the metabolic pathways from phylogenetically diverse isolates are very similar with respect to overall organization. Thus, as originally noted in pseudomonads, a large number of "peripheral aromatic" pathways funnel a range of natural and xenobiotic compounds into a restricted number of "central aromatic" pathways. Nevertheless, these pathways are genetically organized in genus-specific fashions, as exemplified by the β-ketoadipate and Paa pathways. Comparative genomic studies further reveal that some pathways are more widespread than initially thought. Thus, the Box and Paa pathways illustrate the prevalence of non-oxygenolytic ring-cleavage strategies in aerobic aromatic degradation processes. Functional genomic studies have been useful in establishing that even organisms harboring high numbers of homologous enzymes seem to contain few examples of true redundancy. For example, the multiplicity of ring-cleaving dioxygenases in certain rhodococcal isolates may be attributed to the cryptic aromatic catabolism of different terpenoids and steroids. Finally, analyses have indicated that recent genetic flux appears to have played a more significant role in the evolution of some large genomes, such as LB400's, than in others. However, the emerging trend is that the large gene repertoires of potent pollutant degraders such as LB400 and RHA1 have evolved principally through more ancient processes. That this is true in such phylogenetically diverse species is remarkable and further suggests the ancient origin of this catabolic capacity.

Anaerobic biodegradation of pollutants

Anaerobic microbial mineralization of recalcitrant organic pollutants is of great environmental significance and involves intriguing novel biochemical reactions. In particular, hydrocarbons and halogenated compounds have long been doubted to be degradable in the absence of oxygen, but the isolation of hitherto unknown anaerobic hydrocarbon-degrading and reductively dehalogenating bacteria during the last decades provided ultimate proof for these processes in nature. While such research involved mostly chlorinated compounds initially, recent studies have revealed reductive dehalogenation of bromine and iodine moieties in aromatic pesticides. Other reactions, such as biologically induced abiotic reduction by soil minerals, have been shown to deactivate relatively persistent aniline-based herbicides far more rapidly than observed in aerobic environments. Many novel biochemical reactions were discovered enabling the respective metabolic pathways, but progress in the molecular understanding of these bacteria was rather slow, since genetic systems are not readily applicable for most of them. However, with the increasing application of genomics in the field of environmental microbiology, a new and promising perspective is now at hand to obtain molecular insights into these new metabolic properties. Several complete genome sequences were determined during the last few years from bacteria capable of anaerobic organic pollutant degradation. The ~4.7 Mb genome of the facultative denitrifying Aromatoleum aromaticum strain EbN1 was the first to be determined for an anaerobic hydrocarbon degrader (using toluene or ethylbenzene as substrates). The genome sequence revealed about two dozen gene clusters (including several paralogs) coding for a complex catabolic network for anaerobic and aerobic degradation of aromatic compounds. The genome sequence forms the basis for current detailed studies on regulation of pathways and enzyme structures. Further genomes of anaerobic hydrocarbon degrading bacteria were recently completed for the iron-reducing species Geobacter metallireducens (accession nr. NC_007517) and the perchlorate-reducing Dechloromonas aromatica (accession nr. NC_007298), but these have not yet been evaluated in formal publications. Complete genomes were also determined for bacteria capable of anaerobic degradation of halogenated hydrocarbons by halorespiration: the ~1.4 Mb genomes of Dehalococcoides ethenogenes strain 195 and Dehalococcoides sp. strain CBDB1 and the ~5.7 Mb genome of Desulfitobacterium hafniense strain Y51. Characteristic for all these bacteria is the presence of multiple paralogous genes for reductive dehalogenases, implicating a wider dehalogenating spectrum of the organisms than previously known. Moreover, genome sequences provided unprecedented insights into the evolution of reductive dehalogenation and differing strategies for niche adaptation.

Recently, it has become apparent that some organisms, including Desulfitobacterium chlororespirans, originally evaluated for halorespiration on chlorophenols, can also use certain brominated compounds, such as the herbicide bromoxynil and its major metabolite as electron acceptors for growth. Iodinated compounds may be dehalogenated as well, though the process may not satisfy the need for an electron acceptor.

Bioavailability, chemotaxis, and transport of pollutants

Bioavailability, or the amount of a substance that is physicochemically accessible to microorganisms, is a key factor in the efficient biodegradation of pollutants. O'Loughlin et al. (2000) showed that, with the exception of kaolinite clay, most soil clays and cation exchange resins attenuated biodegradation of 2-picoline by Arthrobacter sp. strain R1, as a result of adsorption of the substrate to the clays. Chemotaxis, or the directed movement of motile organisms towards or away from chemicals in the environment, is an important physiological response that may contribute to effective catabolism of molecules in the environment. In addition, mechanisms for the intracellular accumulation of aromatic molecules via various transport mechanisms are also important.

Oil biodegradation

General overview of microbial biodegradation of petroleum oil by microbial communities. Some microorganisms, such as A. borkumensis, are able to use hydrocarbons as their source of carbon in metabolism. They can oxidize the environmentally harmful hydrocarbons while producing harmless products, following the general equation CnHn + O2 → H2O + CO2. This type of metabolism allows these microbes to thrive in areas affected by oil spills and makes them important in the elimination of environmental pollutants.
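For a saturated alkane of general formula CnH2n+2, the fully balanced form of the oxidation summarized in the caption can be written as follows (a standard stoichiometric identity, not part of the original figure):

    \mathrm{C}_n\mathrm{H}_{2n+2} \;+\; \frac{3n+1}{2}\,\mathrm{O}_2 \;\longrightarrow\; n\,\mathrm{CO}_2 \;+\; (n+1)\,\mathrm{H}_2\mathrm{O}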

Petroleum oil contains aromatic compounds that are toxic to most life forms. Episodic and chronic pollution of the environment by oil causes major disruption to the local ecology. Marine environments are especially vulnerable, as oil spills near coastal regions and in the open sea are difficult to contain and make mitigation efforts more complicated. In addition to pollution through human activities, approximately 250 million litres of petroleum enter the marine environment every year from natural seepages. Despite its toxicity, a considerable fraction of petroleum oil entering marine systems is eliminated by the hydrocarbon-degrading activities of microbial communities, in particular by a recently discovered group of specialists, the hydrocarbonoclastic bacteria (HCB). Alcanivorax borkumensis was the first HCB to have its genome sequenced. In addition to hydrocarbons, crude oil often contains various heterocyclic compounds, such as pyridine, which appear to be degraded by similar mechanisms to hydrocarbons.

Cholesterol biodegradation

Many synthetic steroidal compounds, such as some sex hormones, frequently appear in municipal and industrial wastewaters, acting as environmental pollutants with strong metabolic activities that negatively affect ecosystems. Since these compounds are common carbon sources for many different microorganisms, their aerobic and anaerobic mineralization has been extensively studied. The interest of these studies lies in the biotechnological applications of sterol-transforming enzymes for the industrial synthesis of sex hormones and corticoids. Very recently, the catabolism of cholesterol has acquired high relevance because it is involved in the infectivity of the pathogen Mycobacterium tuberculosis (Mtb). Mtb causes tuberculosis, and it has been demonstrated that novel enzyme architectures have evolved to bind and modify steroid compounds like cholesterol in this organism and in other steroid-utilizing bacteria as well. These new enzymes might be of interest for their potential in the chemical modification of steroid substrates.

Analysis of waste biotreatment

Sustainable development requires the promotion of environmental management and a constant search for new technologies to treat vast quantities of wastes generated by increasing anthropogenic activities. Biotreatment, the processing of wastes using living organisms, is an environmentally friendly, relatively simple and cost-effective alternative to physico-chemical clean-up options. Confined environments, such as bioreactors, have been engineered to overcome the physical, chemical and biological limiting factors of biotreatment processes in highly controlled systems. The great versatility in the design of confined environments allows the treatment of a wide range of wastes under optimized conditions. To perform a correct assessment, it is necessary to consider various microorganisms having a variety of genomes and expressed transcripts and proteins. A great number of analyses are often required. Using traditional genomic techniques, such assessments are limited and time-consuming. However, several high-throughput techniques originally developed for medical studies can be applied to assess biotreatment in confined environments.

Metabolic engineering and biocatalytic applications

The study of the fate of persistent organic chemicals in the environment has revealed a large reservoir of enzymatic reactions with great potential in preparative organic synthesis, which has already been exploited for a number of oxygenases on pilot and even on industrial scale. Novel catalysts can be obtained from metagenomic libraries and DNA sequence based approaches. Our increasing capabilities in adapting the catalysts to specific reactions and process requirements by rational and random mutagenesis broaden the scope for application in the fine chemical industry, but also in the field of biodegradation. In many cases, these catalysts need to be exploited in whole cell bioconversions or in fermentations, calling for system-wide approaches to understanding strain physiology and metabolism and rational approaches to the engineering of whole cells as they are increasingly put forward in the area of systems biotechnology and synthetic biology.

Fungal biodegradation

In the ecosystem, different substrates are attacked at different rates by consortia of organisms from different kingdoms. Aspergillus and other moulds play an important role in these consortia because they are adept at recycling starches, hemicelluloses, celluloses, pectins and other sugar polymers. Some aspergilli are capable of degrading more refractory compounds such as fats, oils, chitin, and keratin. Maximum decomposition occurs when there is sufficient nitrogen, phosphorus and other essential inorganic nutrients. Fungi also provide food for many soil organisms.

For Aspergillus the process of degradation is the means of obtaining nutrients. When these moulds degrade human-made substrates, the process usually is called biodeterioration. Both paper and textiles (cotton, jute, and linen) are particularly vulnerable to Aspergillus degradation. Our artistic heritage is also subject to Aspergillus assault. To give but one example, after Florence in Italy flooded in 1966, 74% of the isolates from a damaged Ghirlandaio fresco in the Ognissanti church were Aspergillus versicolor.

Francium

From Wikipedia, the free encyclopedia
 
Francium, 87Fr
Pronunciation: /ˈfrænsiəm/ (FRAN-see-əm)
Mass number: [223]
Position in the periodic table: directly below caesium (Cs) and above ununennium (Uue); between radon and radium.
Atomic number (Z): 87
Group: group 1 (hydrogen and alkali metals)
Period: period 7
Block: s-block
Electron configuration: [Rn] 7s1
Electrons per shell: 2, 8, 18, 32, 18, 8, 1
Physical properties
Phase at STP: solid
Melting point: 300 K (27 °C, 81 °F)
Boiling point: 950 K (677 °C, 1251 °F)
Density (near r.t.): 2.48 g/cm3 (estimated)
Vapor pressure (extrapolated): 1 Pa at 404 K; 10 Pa at 454 K; 100 Pa at 519 K; 1 kPa at 608 K; 10 kPa at 738 K; 100 kPa at 946 K
Atomic properties
Oxidation states: +1 (a strongly basic oxide)
Electronegativity: >0.79 (Pauling scale)
Ionization energies:
  • 1st: 393 kJ/mol
Covalent radius: 260 pm (extrapolated)
Van der Waals radius: 348 pm (extrapolated)
Other properties
Natural occurrence: from decay
Crystal structure: body-centered cubic (bcc) (extrapolated)
Thermal conductivity: 15 W/(m⋅K) (extrapolated)
Electrical resistivity: 3 µΩ⋅m (calculated)
Magnetic ordering: paramagnetic
CAS Number: 7440-73-5
History
Naming: after France, homeland of the discoverer
Discovery and first isolation: Marguerite Perey (1939)
Main isotopes of francium
Isotope | Abundance | Half-life (t1/2) | Decay mode | Product
212Fr | syn | 20.0 min | β+ | 212Rn
212Fr | syn | 20.0 min | α | 208At
221Fr | trace | 4.8 min | α | 217At
222Fr | syn | 14.2 min | β− | 222Ra
223Fr | trace | 22.00 min | β− | 223Ra
223Fr | trace | 22.00 min | α | 219At

Francium is a chemical element with the symbol Fr and atomic number 87. It is extremely radioactive; its most stable isotope, francium-223 (originally called actinium K after the natural decay chain it appears in), has a half-life of only 22 minutes. It is the second-most electropositive element, behind only caesium, and is the second rarest naturally occurring element (after astatine). The isotopes of francium decay quickly into astatine, radium, and radon. The electronic structure of a francium atom is [Rn] 7s1, and so the element is classed as an alkali metal.

Bulk francium has never been seen. Because of the general appearance of the other elements in its periodic table column, it is presumed that francium would appear as a highly reactive metal, if enough could be collected together to be viewed as a bulk solid or liquid. Obtaining such a sample is highly improbable, since the extreme heat of decay resulting from its short half-life would immediately vaporize any viewable quantity of the element.

Francium was discovered by Marguerite Perey in France (from which the element takes its name) in 1939. Prior to its discovery, it was referred to as eka-caesium or ekacaesium because of its conjectured existence below caesium in the periodic table. It was the last element first discovered in nature, rather than by synthesis. Outside the laboratory, francium is extremely rare, with trace amounts found in uranium and thorium ores, where the isotope francium-223 continually forms and decays. As little as 20–30 g (one ounce) exists at any given time throughout the Earth's crust; aside from francium-223 and francium-221, its other isotopes are entirely synthetic. The largest amount produced in the laboratory was a cluster of more than 300,000 atoms.

Characteristics

Francium is one of the most unstable of the naturally occurring elements: its longest-lived isotope, francium-223, has a half-life of only 22 minutes. The only comparable element is astatine, whose most stable natural isotope, astatine-219 (the alpha daughter of francium-223), has a half-life of 56 seconds, although synthetic astatine-210 is much longer-lived with a half-life of 8.1 hours. All isotopes of francium decay into astatine, radium, or radon. Francium-223 also has a shorter half-life than the longest-lived isotope of each synthetic element up to and including element 105, dubnium.

Francium is an alkali metal whose chemical properties mostly resemble those of caesium. A heavy element with a single valence electron, it has the highest equivalent weight of any element. Liquid francium—if created—should have a surface tension of 0.05092 N/m at its melting point. Francium's melting point was estimated to be around 8.0 °C (46.4 °F); a value of 27 °C (81 °F) is also often encountered. The melting point is uncertain because of the element's extreme rarity and radioactivity; a different extrapolation based on Dmitri Mendeleev's method gave 20 ± 1.5 °C (68.0 ± 2.7 °F). The estimated boiling point of 620 °C (1,148 °F) is also uncertain; the estimates 598 °C (1,108 °F) and 677 °C (1,251 °F), as well as the extrapolation from Mendeleev's method of 640 °C (1,184 °F), have also been suggested. The density of francium is expected to be around 2.48 g/cm3 (Mendeleev's method extrapolates 2.4 g/cm3).

Linus Pauling estimated the electronegativity of francium at 0.7 on the Pauling scale, the same as caesium; the value for caesium has since been refined to 0.79, but there are no experimental data to allow a refinement of the value for francium. Francium has a slightly higher ionization energy than caesium, 392.811(4) kJ/mol as opposed to 375.7041(2) kJ/mol for caesium, as would be expected from relativistic effects, and this would imply that caesium is the less electronegative of the two. Francium should also have a higher electron affinity than caesium, and the Fr− ion should be more polarizable than the Cs− ion.

Compounds

Because francium is very unstable, its salts are known only to a small extent. Francium coprecipitates with several caesium salts, such as caesium perchlorate, which results in small amounts of francium perchlorate. This coprecipitation can be used to isolate francium, by adapting the radiocaesium coprecipitation method of Lawrence E. Glendenin and C. M. Nelson. It will additionally coprecipitate with many other caesium salts, including the iodate, the picrate, the tartrate (also rubidium tartrate), the chloroplatinate, and the silicotungstate. It also coprecipitates with silicotungstic acid and with perchloric acid without another alkali metal as a carrier, which leads to other methods of separation.

Francium perchlorate

Francium perchlorate is produced by the reaction of francium chloride and sodium perchlorate. The francium perchlorate coprecipitates with caesium perchlorate. This coprecipitation can be used to isolate francium, by adapting the radiocaesium coprecipitation method of Lawrence E. Glendenin and C. M. Nelson. However, this method is unreliable in separating thallium, which also coprecipitates with caesium. Francium perchlorate's entropy is expected to be 42.7 e.u.

Francium halides

Francium halides are all soluble in water and are expected to be white solids. They are expected to be produced by the reaction of francium with the corresponding halogen; for example, francium chloride would be produced by the reaction of francium and chlorine. Francium chloride has been studied as a pathway to separate francium from other elements, by using the high vapour pressure of the compound, although francium fluoride would have a higher vapour pressure.

Other compounds

Francium nitrate, sulfate, hydroxide, carbonate, acetate, and oxalate are all soluble in water, while the iodate, picrate, tartrate, chloroplatinate, and silicotungstate are insoluble. The insolubility of these compounds is used to extract francium from other radioactive products, such as zirconium, niobium, molybdenum, tin, and antimony, by the method mentioned in the section above. The CsFr molecule is predicted to have francium at the negative end of the dipole, unlike all known heterodiatomic alkali metal molecules. Francium superoxide (FrO₂) is expected to have a more covalent character than its lighter congeners; this is attributed to the 6p electrons in francium being more involved in the francium–oxygen bonding.

The only known double salt of francium has the formula Fr₉Bi₂I₉.

Isotopes

There are 34 known isotopes of francium ranging in atomic mass from 199 to 232. Francium has seven metastable nuclear isomers. Francium-223 and francium-221 are the only isotopes that occur in nature, with the former being far more common.

Francium-223 is the most stable isotope, with a half-life of 21.8 minutes, and it is highly unlikely that an isotope of francium with a longer half-life will ever be discovered or synthesized. Francium-223 is the fifth product of the uranium-235 decay series as a daughter isotope of actinium-227; thorium-227 is the more common daughter. Francium-223 then decays into radium-223 by beta decay (1.149 MeV decay energy), with a minor (0.006%) alpha decay path to astatine-219 (5.4 MeV decay energy).
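
The branching ratio translates directly into partial half-lives for the two decay modes (partial half-life = total half-life divided by the branching fraction); the short sketch below does that arithmetic with the numbers quoted above:

# Partial half-lives of francium-223's decay modes:
#   T_partial = T_total / branching_fraction
TOTAL_HALF_LIFE_MIN = 21.8
BRANCHES = {
    "beta decay to radium-223": 0.99994,     # about 99.994%
    "alpha decay to astatine-219": 0.00006,  # about 0.006%
}

for mode, fraction in BRANCHES.items():
    print(f"{mode}: partial half-life ≈ {TOTAL_HALF_LIFE_MIN / fraction:,.0f} minutes")

# The dominant beta branch has a partial half-life essentially equal to the
# 21.8-minute total; the rare alpha branch corresponds to roughly 250 days.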

Francium-221 has a half-life of 4.8 minutes. It is the ninth product of the neptunium decay series as a daughter isotope of actinium-225. Francium-221 then decays into astatine-217 by alpha decay (6.457 MeV decay energy).

The least stable ground state isotope is francium-215, with a half-life of 0.12 μs: it undergoes a 9.54 MeV alpha decay to astatine-211. Its metastable isomer, francium-215m, is less stable still, with a half-life of only 3.5 ns.

Applications

Because of its instability and rarity, francium has no commercial applications. It has been used for research purposes in chemistry and in studies of atomic structure. Its use as a potential diagnostic aid for various cancers has also been explored, but this application has been deemed impractical.

Francium's ability to be synthesized, trapped, and cooled, along with its relatively simple atomic structure, has made it the subject of specialized spectroscopy experiments. These experiments have led to more specific information regarding energy levels and the coupling constants between subatomic particles. Studies on the light emitted by laser-trapped francium-210 ions have provided accurate data on transitions between atomic energy levels which are fairly similar to those predicted by quantum theory.

History

As early as 1870, chemists thought that there should be an alkali metal beyond caesium, with an atomic number of 87. It was then referred to by the provisional name eka-caesium. Research teams attempted to locate and isolate this missing element, and at least four false claims were made that the element had been found before an authentic discovery was made.

Erroneous and incomplete discoveries

Soviet chemist Dmitry Dobroserdov was the first scientist to claim to have found eka-caesium, or francium. In 1925, he observed weak radioactivity in a sample of potassium, another alkali metal, and incorrectly concluded that eka-caesium was contaminating the sample (the radioactivity from the sample was from the naturally occurring potassium radioisotope, potassium-40). He then published a thesis on his predictions of the properties of eka-caesium, in which he named the element russium after his home country. Shortly thereafter, Dobroserdov began to focus on his teaching career at the Polytechnic Institute of Odessa, and he did not pursue the element further.

The following year, English chemists Gerald J. F. Druce and Frederick H. Loring analyzed X-ray photographs of manganese(II) sulfate. They observed spectral lines which they presumed to be of eka-caesium. They announced their discovery of element 87 and proposed the name alkalinium, as it would be the heaviest alkali metal.

In 1930, Fred Allison of the Alabama Polytechnic Institute claimed to have discovered element 87 (in addition to 85) when analyzing pollucite and lepidolite using his magneto-optical machine. Allison requested that it be named virginium after his home state of Virginia, along with the symbols Vi and Vm. In 1934, H.G. MacPherson of UC Berkeley disproved the effectiveness of Allison's device and the validity of his discovery.

In 1936, Romanian physicist Horia Hulubei and his French colleague Yvette Cauchois also analyzed pollucite, this time using their high-resolution X-ray apparatus. They observed several weak emission lines, which they presumed to be those of element 87. Hulubei and Cauchois reported their discovery and proposed the name moldavium, along with the symbol Ml, after Moldavia, the Romanian province where Hulubei was born. In 1937, Hulubei's work was criticized by American physicist F. H. Hirsh Jr., who rejected Hulubei's research methods. Hirsh was certain that eka-caesium would not be found in nature, and that Hulubei had instead observed mercury or bismuth X-ray lines. Hulubei insisted that his X-ray apparatus and methods were too accurate to make such a mistake. Because of this, Jean Baptiste Perrin, Nobel Prize winner and Hulubei's mentor, endorsed moldavium as the true eka-caesium over Marguerite Perey's recently discovered francium. Perey took pains to be accurate and detailed in her criticism of Hulubei's work, and finally she was credited as the sole discoverer of element 87. All other previous purported discoveries of element 87 were ruled out due to francium's very limited half-life.

Perey's analysis

Eka-caesium was discovered on January 7, 1939, by Marguerite Perey of the Curie Institute in Paris, when she purified a sample of actinium-227 which had been reported to have a decay energy of 220 keV. Perey noticed decay particles with an energy level below 80 keV. Perey thought this decay activity might have been caused by a previously unidentified decay product, one which was separated during purification, but emerged again out of the pure actinium-227. Various tests eliminated the possibility of the unknown element being thorium, radium, lead, bismuth, or thallium. The new product exhibited chemical properties of an alkali metal (such as coprecipitating with caesium salts), which led Perey to believe that it was element 87, produced by the alpha decay of actinium-227. Perey then attempted to determine the proportion of beta decay to alpha decay in actinium-227. Her first test put the alpha branching at 0.6%, a figure which she later revised to 1%.

Perey named the new isotope actinium-K (it is now referred to as francium-223) and in 1946, she proposed the name catium (Cm) for her newly discovered element, as she believed it to be the most electropositive cation of the elements. Irène Joliot-Curie, one of Perey's supervisors, opposed the name due to its connotation of cat rather than cation; furthermore, the symbol coincided with that which had since been assigned to curium. Perey then suggested francium, after France. This name was officially adopted by the International Union of Pure and Applied Chemistry (IUPAC) in 1949, becoming the second element after gallium to be named after France. It was assigned the symbol Fa, but this abbreviation was revised to the current Fr shortly thereafter. Francium was the last element discovered in nature, rather than synthesized, following hafnium and rhenium. Further research into francium's structure was carried out by, among others, Sylvain Lieberman and his team at CERN in the 1970s and 1980s.

Occurrence

This sample of uraninite contains about 100,000 atoms (3.3×10⁻²⁰ g) of francium-223 at any given time.

²²³Fr is the result of the alpha decay of ²²⁷Ac and can be found in trace amounts in uranium minerals. In a given sample of uranium, there is estimated to be only one francium atom for every 1 × 10¹⁸ uranium atoms. It is also calculated that there is a total mass of at most 30 g of francium in the Earth's crust at any given time.
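
These abundance figures can be sanity-checked with Avogadro's number: the sketch below converts 30 g of francium-223 into a number of atoms, and the atom count of the uraninite sample pictured above back into a mass (the molar mass is taken as approximately 223 g/mol):

# Back-of-the-envelope check of the abundance figures quoted above.
AVOGADRO = 6.022e23        # atoms per mole
MOLAR_MASS_FR223 = 223.0   # g/mol, approximate

# At most ~30 g of francium in the crust at any one time:
crust_atoms = 30.0 / MOLAR_MASS_FR223 * AVOGADRO
print(f"atoms in the crust: ~{crust_atoms:.1e}")          # roughly 8e22 atoms

# ~100,000 atoms in the uraninite sample pictured above:
sample_mass_g = 100_000 * MOLAR_MASS_FR223 / AVOGADRO
print(f"mass of 100,000 atoms: ~{sample_mass_g:.1e} g")   # a few times 1e-20 g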

Production

Francium can be synthesized by a fusion reaction when a gold-197 target is bombarded with a beam of oxygen-18 atoms from a linear accelerator in a process originally developed at the physics department of the State University of New York at Stony Brook in 1995. Depending on the energy of the oxygen beam, the reaction can yield francium isotopes with masses of 209, 210, and 211.

¹⁹⁷Au + ¹⁸O → ²⁰⁹Fr + 6 n
¹⁹⁷Au + ¹⁸O → ²¹⁰Fr + 5 n
¹⁹⁷Au + ¹⁸O → ²¹¹Fr + 4 n
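
The three channels differ only in how many neutrons evaporate from the compound nucleus: the mass numbers must balance (197 + 18 = 215), and the proton numbers already do (79 + 8 = 87, francium). A minimal bookkeeping check:

# Mass-number bookkeeping for the gold + oxygen fusion reactions above.
# Compound nucleus: A = 197 + 18 = 215; each evaporated neutron lowers the
# mass number of the francium product by one. Protons: 79 (Au) + 8 (O) = 87 (Fr).
A_COMPOUND = 197 + 18

for neutrons_emitted in (6, 5, 4):
    a_francium = A_COMPOUND - neutrons_emitted
    print(f"197Au + 18O -> {a_francium}Fr + {neutrons_emitted} n")

# Reproduces the three channels listed above: 209Fr, 210Fr and 211Fr.
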
A magneto-optical trap, which can hold neutral francium atoms for short periods of time.

The francium atoms leave the gold target as ions, which are neutralized by collision with yttrium and then isolated in a magneto-optical trap (MOT) in a gaseous unconsolidated state. Although the atoms only remain in the trap for about 30 seconds before escaping or undergoing nuclear decay, the process supplies a continual stream of fresh atoms. The result is a steady state containing a fairly constant number of atoms for a much longer time. The original apparatus could trap up to a few thousand atoms, while a later improved design could trap over 300,000 at a time. Sensitive measurements of the light emitted and absorbed by the trapped atoms provided the first experimental results on various transitions between atomic energy levels in francium. Initial measurements show very good agreement between experimental values and calculations based on quantum theory. The research project using this production method relocated to TRIUMF in 2012, where over 10⁶ francium atoms have been held at a time, including large amounts of ²⁰⁹Fr in addition to ²⁰⁷Fr and ²²¹Fr.
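
The steady state described here is simply the balance between how fast fresh atoms are loaded into the trap and how fast they are lost: N_steady ≈ loading rate × trap lifetime. The sketch below uses the roughly 30-second lifetime quoted above together with a purely illustrative loading rate (not a figure from the source):

# Steady-state population of a magneto-optical trap: N = loading_rate * lifetime.
TRAP_LIFETIME_S = 30.0              # quoted above
loading_rate_atoms_per_s = 1e4      # assumed, for illustration only

n_steady = loading_rate_atoms_per_s * TRAP_LIFETIME_S
print(f"steady-state population: ~{n_steady:.0e} atoms")

# With these numbers, ~3e5 atoms, the same order of magnitude as the
# 300,000-atom traps described above.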

Other synthesis methods include bombarding radium with neutrons, and bombarding thorium with protons, deuterons, or helium ions.

²²³Fr can also be isolated from samples of its parent ²²⁷Ac, the francium being milked via elution with NH₄Cl–CrO₃ from an actinium-containing cation exchanger and purified by passing the solution through a silicon dioxide compound loaded with barium sulfate.

Image of light emitted by a sample of 200,000 francium atoms in a magneto-optical trap

Heat image of 300,000 francium atoms in a magneto-optical trap

In 1996, the Stony Brook group trapped 3000 atoms in their MOT, which was enough for a video camera to capture the light given off by the atoms as they fluoresce. Francium has not been synthesized in amounts large enough to weigh.
