Saturday, July 14, 2018

Strategies for Engineered Negligible Senescence

From Wikipedia, the free encyclopedia
 
Strategies for Engineered Negligible Senescence (SENS) is the term coined by British biogerontologist Aubrey de Grey for a diverse range of regenerative medical therapies, planned or currently in development, for the periodic repair of all age-related damage to human tissue. The ultimate purpose is to maintain a state of negligible senescence in the patient, thereby postponing age-associated disease for as long as the therapies are reapplied.

The term "negligible senescence" was first used in the early 1990s by professor Caleb Finch to describe organisms such as lobsters and hydras, which do not show symptoms of aging. The term "engineered negligible senescence" first appeared in print in Aubrey de Grey's 1999 book The Mitochondrial Free Radical Theory of Aging,[3] and was later prefixed with the word "strategies" in the article "Time to Talk SENS: Critiquing the Immutability of Human Aging".[4] De Grey called SENS a "goal-directed rather than curiosity-driven"[5] approach to the science of aging, and "an effort to expand regenerative medicine into the territory of aging".[6] To this end, SENS identifies seven categories of aging "damage" and a specific regenerative medical proposal for treating each.

While many biogerontologists find it "worthy of discussion"[7][8] and SENS conferences feature important research in the field,[9][10] some contend that the ultimate goals of de Grey's programme are too speculative given the current state of technology, referring to it as "fantasy rather than science".[11][12]

Framework

The arrows with flat heads are a notation meaning "inhibits," used in the literature of gene expression and gene regulation.

The ultimate objective of SENS is the eventual elimination of age-related diseases and infirmity by repeatedly reducing the state of senescence in the organism. The SENS project consists in implementing a series of periodic medical interventions designed to repair, prevent or render irrelevant all the types of molecular and cellular damage that cause age-related pathology and degeneration, in order to avoid debilitation and death from age-related causes.[2]

De Grey defines aging as "the set of accumulated side effects from metabolism that eventually kills us", and, more specifically, as follows: "a collection of cumulative changes to the molecular and cellular structure of an adult organism, which result from essential metabolic processes, but which also, once they progress far enough, increasingly disrupt metabolism, resulting in pathology and death."[13] He adds: "geriatrics is the attempt to stop damage from causing pathology; traditional gerontology is the attempt to stop metabolism from causing damage; and the SENS (engineering) approach is to eliminate the damage periodically, so keeping its abundance below the level that causes any pathology." The SENS approach to biomedical gerontology is thus distinctive because of its emphasis on tissue rejuvenation rather than attempting to slow the aging process.

By enumerating the various differences between young and old tissue identified by the science of biogerontology, a 'damage' report was drawn up, which in turn formed the basis of the SENS strategy. The results fell into seven main categories of 'damage', seven alterations whose reversal would constitute negligible senescence:
  1. cell loss or atrophy (without replacement),[4][13][14]
  2. oncogenic nuclear mutations and epimutations,[15][16][17]
  3. cell senescence (death-resistant cells),[18][19]
  4. mitochondrial mutations,[20][21]
  5. intracellular junk, or junk inside cells (lysosomal aggregates),[22][23]
  6. extracellular junk, or junk outside cells (extracellular aggregates),[18][19]
  7. random extracellular cross-linking.[18][19]
For each of these areas SENS offers at least one strategy, with both a research and a clinical component. The clinical component is required because for some of the proposed therapies feasibility has already been demonstrated, but the treatments have not yet been fully developed and approved for human trials. These strategies do not presuppose that the underlying metabolic mechanisms of aging are fully understood, only that the forms senescence takes are directly observable and described in the scientific literature.

Types of aging damage and treatment schemes

Nuclear mutations/epimutations—OncoSENS

These are changes to the nuclear DNA (nDNA), or to proteins which bind to the nDNA. Certain mutations can lead to cancer.

Correcting such mutations would be needed to prevent or cure cancer. SENS focuses on a strategy called "whole-body interdiction of lengthening telomeres" (WILT): removing the genes for telomere elongation from all cells, with the resulting limits on cell renewal compensated by periodic regenerative medicine treatments.

Mitochondrial mutations—MitoSENS

Mitochondria are components in our cells that are important for energy production. Because of the highly oxidative environment in mitochondria and their lack of sophisticated DNA-repair systems, mitochondrial mutations are believed to be a major cause of progressive cellular degeneration.

This would be corrected by allotopic expression: placing copies of the mitochondrial DNA's protein-coding genes in the cell nucleus, where they are better protected. De Grey argues that experimental evidence demonstrates the operation is feasible; however, a 2003 study showed that some mitochondrial proteins are too hydrophobic to survive transport from the cytoplasm into the mitochondria.[24]

Intracellular junk—LysoSENS

Our cells are constantly breaking down proteins and other molecules that are no longer useful or which can be harmful. Those molecules which can’t be digested accumulate as junk inside our cells, which is detected in the form of lipofuscin granules. Atherosclerosis, macular degeneration, liver spots on the skin and all kinds of neurodegenerative diseases (such as Alzheimer's disease) are associated with this problem.

Junk inside cells might be removed by adding new enzymes to the cell's natural digestion organ, the lysosome. These enzymes would be taken from bacteria, molds and other organisms that are known to completely digest animal bodies.

Extracellular junk—AmyloSENS

Harmful junk protein can accumulate outside of our cells. Junk here means useless things accumulated by a body, but which cannot be digested or removed by its processes, such as the amyloid plaques characteristic of Alzheimer's disease and other amyloidoses.

Junk outside cells might be removed by enhanced phagocytosis (the normal process used by the immune system) and by small-molecule drugs able to break the chemical beta-bonds holding the aggregates together. Larger deposits in this class can be removed surgically.

Cell loss and atrophy—RepleniSENS

Some of the cells in our bodies cannot be replaced, or can be only replaced very slowly—more slowly than they die. This decrease in cell number affects some of the most important tissues of the body. Muscle cells are lost in skeletal muscles and the heart, causing them to become frailer with age. Loss of neurons in the substantia nigra causes Parkinson's disease, while loss of immune cells impairs the immune system.

This can be partly corrected by therapies involving exercise and growth factors, but stem cell therapy, regenerative medicine and tissue engineering are almost certainly required for any more than just partial replacement of lost cells.

Cell senescence—ApoptoSENS

Senescence is a state in which cells are no longer able to divide but also do not die and make way for new cells. They may also do other harmful things, such as secreting pro-inflammatory proteins. Degeneration of joints, immune senescence, accumulation of visceral fat and type 2 diabetes have been linked to this. Such cells have become resistant to the signals, sent as part of a process called apoptosis, that instruct damaged cells to destroy themselves.

Cells in this state could be eliminated by forcing them to apoptose (via suicide genes, vaccines, or recently discovered senolytic agents), and healthy cells would multiply to replace them.

Extracellular crosslinks—GlycoSENS

Tissues are held together by special linking proteins outside the cells. Cross-links are chemical bonds between these extracellular structures, which are part of the body but not within any cell. When too many cross-links form in a tissue, it can lose its elasticity, causing problems including arteriosclerosis, presbyopia and weakened skin texture. In senescent people many such tissues become brittle and weak.

SENS proposes to further develop small-molecular drugs and enzymes to break links caused by sugar-bonding, known as advanced glycation endproducts, and other common forms of chemical linking.

Scientific controversy

While some fields mentioned as branches of SENS are broadly supported by the medical research community, e.g., stem cell research (RepleniSENS), anti-Alzheimer's research (AmyloSENS) and oncogenomics (OncoSENS), the SENS programme as a whole has been highly controversial. Many critics argue that the SENS agenda is fanciful, and that the highly complicated biomedical phenomena involved in the aging process contain too many unknowns for SENS to be fully implementable in the foreseeable future. Cancer may well deserve special attention as an aging-associated disease (OncoSENS), but the SENS claim that nuclear DNA damage matters for aging only because of cancer has been challenged in the literature,[25] as well as by material in the article DNA damage theory of aging.

In November 2005, 28 biogerontologists published a statement of criticism in EMBO Reports, "Science fact and the SENS agenda: what can we reasonably expect from ageing research?,"[26] arguing "each one of the specific proposals that comprise the SENS agenda is, at our present stage of ignorance, exceptionally optimistic,"[26] and that some of the specific proposals "will take decades of hard work [to be medically integrated], if [they] ever prove to be useful."[26] The researchers argue that while there is "a rationale for thinking that we might eventually learn how to postpone human illnesses to an important degree,"[26] increased basic research, rather than the goal-directed approach of SENS, is presently the scientifically appropriate goal. This article was written in response to a July 2005 EMBO Reports article previously published by de Grey[27] and a response from de Grey was published in the same November issue.[28] De Grey summarizes these events in "The biogerontology research community's evolving view of SENS," published on the Methuselah Foundation website.[29]

In 2012, Colin Blakemore criticised Aubrey de Grey, but not SENS specifically, in a debate hosted at the Oxford University Scientific Society.[citation needed]

More recently, biogerontologist Marios Kyriazis has sharply criticised the clinical applicability of SENS[30][31] claiming that such therapies, even if developed in the laboratory, would be practically unusable by the general public.[32] De Grey responded to one such criticism.[33]

Technology Review controversy

In February 2005, Technology Review, which is owned by the Massachusetts Institute of Technology, published an article by Sherwin Nuland, a Clinical Professor of Surgery at Yale University and the author of "How We Die",[34] that drew a skeptical portrait of SENS. At the time, de Grey was a computer associate in the FlyBase facility of the Department of Genetics at the University of Cambridge. The April 2005 issue of Technology Review contained a reply by Aubrey de Grey[35] and numerous comments from readers.[36]

During June 2005, David Gobel, CEO and co-founder of the Methuselah Foundation, offered Technology Review $20,000 to fund a prize competition to publicly clarify the viability of the SENS approach. In July 2005, Jason Pontin, the magazine's editor-in-chief, announced a $20,000 prize, funded 50/50 by the Methuselah Foundation and MIT Technology Review, open to any molecular biologist with a record of publication in biogerontology who could prove that the alleged benefits of SENS were "so wrong that it is unworthy of learned debate."[37] Technology Review received five submissions to its challenge. In March 2006, Technology Review announced that it had chosen a panel of judges for the challenge: Rodney Brooks, Anita Goel, Nathan Myhrvold, Vikram Sheel Kumar, and Craig Venter.[38] Three of the five submissions met the terms of the prize competition and were published by Technology Review on June 9, 2006, accompanied by rebuttals from de Grey and counter-responses to those rebuttals. On July 11, 2006, Technology Review published the results of the SENS Challenge.[7][39]

In the end, no one won the $20,000 prize. The judges felt that no submission met the criterion of the challenge and discredited SENS, although they unanimously agreed that one submission, by Preston Estep and his colleagues, was the most eloquent. Craig Venter succinctly expressed the prevailing opinion: "Estep et al. ... have not demonstrated that SENS is unworthy of discussion, but the proponents of SENS have not made a compelling case for it."[7] Summarizing the judges' deliberations, Pontin wrote that SENS is "highly speculative" and that many of its proposals could not be verified with the scientific technology of the period. Myhrvold described SENS as belonging to a kind of "antechamber of science," where its proposals wait until technology and scientific knowledge advance to the point where they can be tested.[7][8] In a letter of dissent dated July 11, 2006 in Technology Review, Estep et al. criticized the ruling of the judges.

Social and economic implications

Of the roughly 150,000 people who die each day across the globe, about two thirds—100,000 per day—die of age-related causes.[40] In industrialized nations, the proportion is much higher, reaching 90%.[40]
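The global figures above are consistent with simple arithmetic; the inputs below are the numbers quoted from the cited source, and the calculation itself is only an illustrative check:

```python
# Sanity check of the mortality figures quoted above.
# Inputs are the figures from the cited source [40]; the arithmetic is illustrative.
deaths_per_day = 150_000        # approximate global deaths per day
age_related_fraction = 2 / 3    # "about two thirds" die of age-related causes

age_related_deaths = deaths_per_day * age_related_fraction
print(round(age_related_deaths))  # 100000, matching the "100,000 per day" figure
```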

De Grey and other scientists in the general field have argued that the costs of a rapidly growing aging population will increase to the degree that the costs of an accelerated pace of aging research are easy to justify in terms of future costs avoided. Olshansky et al. 2006 argue, for example, that the total economic cost of Alzheimer's disease in the US alone will increase from $80–100 billion today to more than $1 trillion in 2050.[41] "Consider what is likely to happen if we don't [invest further in aging research]. Take, for instance, the impact of just one age-related disorder, Alzheimer disease (AD). For no other reason than the inevitable shifting demographics, the number of Americans stricken with AD will rise from 4 million today to as many as 16 million by midcentury. This means that more people in the United States will have AD by 2050 than the entire current population of the Netherlands. Globally, AD prevalence is expected to rise to 45 million by 2050, with three of every four patients with AD living in a developing nation. The US economic toll is currently $80–$100 billion, but by 2050 more than $1 trillion will be spent annually on AD and related dementias. The impact of this single disease will be catastrophic, and this is just one example."[41]
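To give a rough sense of the scale of these projections, the implied compound annual growth rates can be computed from the figures quoted above. The inputs are taken from the Olshansky et al. passage; the growth-rate calculation is a back-of-the-envelope sketch, not a figure from the source, and the cost baseline assumes the upper end of the $80–100 billion range:

```python
# Back-of-the-envelope growth rates implied by the Olshansky et al. (2006) figures.
# Inputs are from the quoted passage; the $100 billion baseline is an assumption
# (upper end of the "$80-$100 billion" range), and 2006 is taken as "today".
ad_patients_2006 = 4_000_000    # US Alzheimer's patients "today"
ad_patients_2050 = 16_000_000   # projected US patients by midcentury
cost_2006 = 100e9               # current US economic toll (upper estimate)
cost_2050 = 1e12                # "more than $1 trillion" annually by 2050

years = 2050 - 2006

# implied compound annual growth rate: (end / start) ** (1 / years) - 1
cagr_patients = (ad_patients_2050 / ad_patients_2006) ** (1 / years) - 1
cagr_cost = (cost_2050 / cost_2006) ** (1 / years) - 1

print(f"Implied patient growth: {cagr_patients:.1%}/yr")  # about 3.2%/yr
print(f"Implied cost growth:    {cagr_cost:.1%}/yr")      # about 5.4%/yr
```

Even modest-looking annual rates compound to a fourfold rise in patients and a tenfold rise in costs over 44 years, which is the arithmetic behind the "catastrophic" framing in the quote.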

SENS meetings

There have been four SENS roundtables and six SENS conferences held.[42][43] The first SENS roundtable was held in Oakland, California in October 2000,[44] and the last was held in Bethesda, Maryland in July 2004.[45]

On March 30–31, 2007, a North American SENS symposium was held in Edmonton, Alberta, Canada as the Edmonton Aging Symposium.[46][47] Another SENS-related conference ("Understanding Aging") was held at UCLA in Los Angeles, California on June 27–29, 2008.[48]

Six SENS conferences have been held at Queens' College, Cambridge in England. All the conferences were organized by de Grey and all featured world-class researchers in the field of biogerontology.
  • The first SENS conference was held in September 2003 as the 10th Congress of the International Association of Biomedical Gerontology[49] with the proceedings published in the Annals of the New York Academy of Sciences.[50]
  • The second SENS conference was held in September 2005 and was simply called Strategies for Engineered Negligible Senescence (SENS), Second Conference[51] with the proceedings published in Rejuvenation Research.
  • The third SENS conference was held in September 2007.[52]
  • The fourth SENS conference was held September 3–7, 2009.
  • The fifth SENS conference was held August 31 – September 4, 2011.[53][54] Videos of the presentations are available.
  • The sixth SENS conference (SENS6) was held from September 3–7, 2013.
Another meeting was held in August 2014 in Santa Clara, California.[43]

SENS Research Foundation

The SENS Research Foundation is a non-profit organization co-founded by Michael Kope, Aubrey de Grey, Jeff Hall, Sarah Marr and Kevin Perrott, which is based in California, United States. Its activities include SENS-based research programs and public relations work for the acceptance of and interest in related research.
Before March 2009, the SENS research programme was mainly pursued by the Methuselah Foundation, co-founded by Aubrey de Grey and David Gobel. The Methuselah Foundation is most notable for establishing the Methuselah Mouse Prize, a monetary prize awarded to researchers who extend the lifespan of mice to unprecedented lengths.[55]

For 2013, the SENS Research Foundation had an annual research budget of approximately $4 million. Half was funded by a $13 million personal contribution from Aubrey de Grey's own wealth,[56] and the other half came from external donors, the largest being Peter Thiel; another Internet entrepreneur, Jason Hope,[57] has recently begun to contribute comparable sums.

Physician

From Wikipedia, the free encyclopedia

The Doctor by Luke Fildes (detail)[1][2]
Occupation
Names: physician, medical practitioner, medical doctor, or simply doctor
Occupation type: Professional
Activity sectors: Medicine, health care
Competencies: The ethics, art and science of medicine; analytical skills; critical thinking
Education required: MBBS, MD, DO
Fields of employment: Clinics, hospitals
Related jobs: General practitioner, family physician, surgeon, medical specialist, dentist, chiropractor


A physician, medical practitioner, medical doctor, or simply doctor is a professional who practises medicine, which is concerned with promoting, maintaining, or restoring health through the study, diagnosis, and treatment of disease, injury, and other physical and mental impairments. Physicians may focus their practice on certain disease categories, types of patients and methods of treatment—known as specialities—or they may assume responsibility for the provision of continuing and comprehensive medical care to individuals, families, and communities—known as general practice. Medical practice properly requires both a detailed knowledge of the academic disciplines (such as anatomy and physiology) underlying diseases and their treatment—the science of medicine—and also a decent competence in its applied practice—the art or craft of medicine.

Both the role of the physician and the meaning of the word itself vary around the world. Degrees and other qualifications vary widely, but there are some common elements, such as medical ethics requiring that physicians show consideration, compassion, and benevolence for their patients.

Modern meanings

The Italian Francesco Redi, considered to be the founder of experimental biology, was the first to recognize and correctly describe details of many important parasites.[4]

Specialist in internal medicine

Around the world the term physician refers to a specialist in internal medicine or one of its many sub-specialties (especially as opposed to a specialist in surgery). This meaning of physician conveys a sense of expertise in treatment by drugs or medications, rather than by the procedures of surgeons.[5]

This term is at least nine hundred years old in English: physicians and surgeons were once members of separate professions, and traditionally were rivals. The Shorter Oxford English Dictionary, third edition, gives a Middle English quotation making this contrast, from as early as 1400: "O Lord, whi is it so greet difference betwixe a cirugian and a physician."[6]

Henry VIII granted a charter to the London Royal College of Physicians in 1518. It was not until 1540 that he granted the Company of Barber-Surgeons (ancestor of the Royal College of Surgeons) its separate charter. In the same year, the English monarch established the Regius Professorship of Physic at the University of Cambridge.[7] Newer universities would probably describe such an academic as a professor of internal medicine. Hence, in the 16th century, physic meant roughly what internal medicine does now.

Currently, a specialist physician in the United States may be described as an internist. Another term, hospitalist, was introduced in 1996,[8] to describe US specialists in internal medicine who work largely or exclusively in hospitals. Such 'hospitalists' now make up about 19% of all US general internists,[9] who are often called general physicians in Commonwealth countries.

This original use, as distinct from surgeon, is common in most of the world including the United Kingdom and other Commonwealth countries (such as Australia, Bangladesh, India, New Zealand, Pakistan, South Africa, Sri Lanka, Zimbabwe), as well as in places as diverse as Brazil, Hong Kong, Indonesia, Japan, Ireland, and Taiwan. In such places, the more general English terms doctor or medical practitioner are prevalent, describing any practitioner of medicine (whom an American would likely call a physician, in the broad sense).[10] In Commonwealth countries, specialist pediatricians and geriatricians are also described as specialist physicians who have sub-specialized by age of patient rather than by organ system.

Physician and surgeon

Around the world, the combined term "physician and surgeon" is used to describe either a general practitioner or any medical practitioner irrespective of specialty.[5][6] This usage still shows the original meaning of physician and preserves the old difference between a physician, as a practitioner of physic, and a surgeon. The term may be used by state medical boards in the United States of America, and by equivalent bodies in provinces of Canada, to describe any medical practitioner.

North America

Elizabeth Blackwell, the first female physician to receive a medical degree in the United States

In modern English, the term physician is used in two main ways, with relatively broad and narrow meanings respectively. This is the result of history and is often confusing. These meanings and variations are explained below.

In the United States and Canada, the term physician describes all medical practitioners holding a professional medical degree. The American Medical Association, established in 1847, as well as the American Osteopathic Association, founded in 1897, both currently use the term physician to describe members. However, the American College of Physicians, established in 1915, does not: its title uses physician in its original sense.

American physicians

The vast majority of physicians trained in the United States have a Doctor of Medicine degree, and use the initials M.D. A smaller number attend Osteopathic schools and have a Doctor of Osteopathic Medicine degree and use the initials D.O.[11] After completion of medical school, physicians complete a residency in the specialty in which they will practice. Subspecialties require the completion of a fellowship after residency.

All boards of certification now require that physicians demonstrate, by examination, continuing mastery of the core knowledge and skills for a chosen specialty. Recertification varies by particular specialty between every seven and every ten years.

Podiatric physicians

Also in the United States, the American Podiatric Medical Association (APMA) defines podiatrists as physicians and surgeons who fall under the department of surgery in hospitals.[12] They train for the Doctor of Podiatric Medicine (DPM) degree.[13] This degree is also available at one Canadian university, namely the Université du Québec à Trois-Rivières. Students are typically required to complete an internship in New York before obtaining their professional degree.

Shortage

Many countries in the developing world have the problem of too few physicians.[14] A shortage of doctors can lead to diseases spreading out of control as seen in the Ebola virus epidemic in West Africa. In 2015, the Association of American Medical Colleges warned that the US will face a doctor shortage of as many as 90,000 by 2025.[15]

Social role and world view

Biomedicine

Within Western culture and over recent centuries, medicine has become increasingly based on scientific reductionism and materialism. This style of medicine is now dominant throughout the industrialized world, and is often termed biomedicine by medical anthropologists.[16] Biomedicine "formulates the human body and disease in a culturally distinctive pattern",[17] and is a world view learnt by medical students. Within this tradition, the medical model is a term for the complete "set of procedures in which all doctors are trained" (R. D. Laing, 1972),[18] including mental attitudes. A particularly clear expression of this world view, currently dominant among conventional physicians, is evidence-based medicine. Within conventional medicine, most physicians still pay heed to their ancient traditions:
The critical sense and sceptical attitude of the Hippocratic school laid the foundations of modern medicine on broad lines, and we owe to it: first, the emancipation of medicine from the shackles of priestcraft and of caste; secondly, the conception of medicine as an art based on accurate observation, and as a science, an integral part of the science of man and of nature; thirdly, the high moral ideals, expressed in that most "memorable of human documents" (Gomperz), the Hippocratic oath; and fourthly, the conception and realization of medicine as the profession of a cultivated gentleman.

Sir William Osler, Chauvinism in Medicine (1902)[19]
In this Western tradition, physicians are considered to be members of a learned profession, and enjoy high social status, often combined with expectations of a high and stable income and job security. However, medical practitioners often work long and inflexible hours, with shifts at unsociable times. Their high status is partly from their extensive training requirements, and also because of their occupation's special ethical and legal duties. The term traditionally used by physicians to describe a person seeking their help is the word patient (although one who visits a physician for a routine check-up may also be so described). This word patient is an ancient reminder of medical duty, as it originally meant 'one who suffers'. The English noun comes from the Latin word patiens, the present participle of the deponent verb, patior, meaning 'I am suffering,' and akin to the Greek verb πάσχειν (= paskhein, to suffer) and its cognate noun πάθος (= pathos).[6][20]

Physicians in the original, narrow sense (specialist physicians or internists, see above) are commonly members or fellows of professional organizations, such as the American College of Physicians or the Royal College of Physicians in the United Kingdom, and such hard-won membership is itself a mark of status.[citation needed]

Alternative medicine

While contemporary biomedicine has distanced itself from its ancient roots in religion and magic, many forms of traditional medicine[21] and alternative medicine continue to espouse vitalism in various guises: 'As long as life had its own secret properties, it was possible to have sciences and medicines based on those properties' (Grossinger 1980).[22] The US National Center for Complementary and Alternative Medicine (NCCAM) classifies CAM therapies into five categories or domains, including:[23] alternative medical systems, or complete systems of therapy and practice; mind-body interventions, or techniques designed to facilitate the mind's effect on bodily functions and symptoms; biologically based systems including herbalism; and manipulative and body-based methods such as chiropractic and massage therapy.

In considering these alternate traditions that differ from biomedicine (see above), medical anthropologists emphasize that all ways of thinking about health and disease have a significant cultural content, including conventional western medicine.

Ayurveda, Unani medicine and homeopathy are popular types of alternative medicine. They are included in the national systems of medicine in countries such as India. In general, practitioners of these systems in these countries are referred to as Ved, Hakim and homeopathic doctor/homeopath/homeopathic physician, respectively.

Physicians' own health

Some commentators have argued that physicians have duties to serve as role models for the general public in matters of health, for example by not smoking cigarettes.[26] Indeed, in most western nations relatively few physicians smoke, and their professional knowledge does appear to have a beneficial effect on their health and lifestyle. According to a study of male physicians,[27] life expectancy is slightly higher for physicians (73.0 years for white and 68.7 for black) than lawyers or many other highly educated professionals. Causes of death less likely in physicians than the general population include respiratory disease (including pneumonia, pneumoconioses, COPD, but excluding emphysema and other chronic airway obstruction), alcohol-related deaths, rectosigmoidal and anal cancers, and bacterial diseases.[27]

Physicians do experience exposure to occupational hazards, and there is a well-known aphorism that "doctors make the worst patients".[28] Causes of death that are shown to be higher in the physician population include suicide among doctors and self-inflicted injury, drug-related causes, traffic accidents, and cerebrovascular and ischaemic heart disease.[27]

Education and training

Medical education and career pathways for doctors vary considerably across the world.

All medical practitioners

In all developed countries, entry-level medical education programs are tertiary-level courses, undertaken at a medical school attached to a university. Depending on jurisdiction and university, entry may follow directly from secondary school or require pre-requisite undergraduate education. The former commonly takes five or six years to complete. Programs that require previous undergraduate education (typically a three- or four-year degree, often in Science) are usually four or five years in length. Hence, gaining a basic medical degree may typically take from five to eight years, depending on jurisdiction and university.

Following completion of entry-level training, newly graduated medical practitioners are often required to undertake a period of supervised practice before full registration is granted, typically one or two years. This may be referred to as an "internship", as the "foundation" years in the UK, or as "conditional registration". Some jurisdictions, including the United States, require residencies for practice.

Medical practitioners hold a medical degree specific to the university from which they graduated. This degree qualifies the medical practitioner to become licensed or registered under the laws of that particular country, and sometimes of several countries, subject to requirements for internship or conditional registration.

Specialists in internal medicine

In some jurisdictions, specialty training begins immediately following completion of entry-level training, or even before. In other jurisdictions, junior medical doctors must undertake generalist (un-streamed) training for one or more years before commencing specialization. Hence, depending on jurisdiction, a specialist physician (internist) often does not achieve recognition as a specialist until twelve or more years after commencing basic medical training: five to eight years at university to obtain a basic medical qualification, and up to another nine years to become a specialist.

Regulation

In most jurisdictions, physicians (in either sense of the word) need government permission to practice. Such permission is intended to promote public safety, and often to protect the public purse, as medical care is commonly subsidized by national governments.

In some jurisdictions (e.g., Singapore), it is common for physicians to inflate their qualifications with the title "Dr" in correspondence or on name cards, even if their qualifications are limited to a basic (e.g., bachelor-level) degree. In other countries (e.g., Germany), only physicians holding an academic doctorate may call themselves doctor; on the other hand, the European Research Council has decided that the German medical doctorate does not meet the international standards of a PhD research degree.[29][30]

All medical practitioners

Among the English-speaking countries, this process is known either as licensure as in the United States, or as registration in the United Kingdom, other Commonwealth countries, and Ireland. Synonyms in use elsewhere include colegiación in Spain, ishi menkyo in Japan, autorisasjon in Norway, Approbation in Germany, and "άδεια εργασίας" in Greece. In France, Italy and Portugal, civilian physicians must be members of the Order of Physicians to practice medicine.

In some countries, including the United Kingdom and Ireland, the profession largely regulates itself, with the government affirming the regulating body's authority. The best known example of this is probably the General Medical Council of Britain. In all countries, the regulating authorities will revoke permission to practice in cases of malpractice or serious misconduct.

In the large English-speaking federations (United States, Canada, Australia), the licensing or registration of medical practitioners is done at a state or provincial level; in New Zealand it is done nationally. Australian states formerly each had a "Medical Board," most of which have now been replaced by the Australian Health Practitioner Regulation Agency (AHPRA), while Canadian provinces usually have a "College of Physicians and Surgeons." All American states have an agency usually called the "Medical Board," although alternate names exist, such as "Board of Medicine," "Board of Medical Examiners," "Board of Medical Licensure," or "Board of Healing Arts."[31] After graduating from a first-professional school, physicians who wish to practice in the U.S. usually take standardized exams, such as the USMLE for MDs.

Specialists in internal medicine

Most countries have some method of officially recognizing specialist qualifications in all branches of medicine, including internal medicine. Sometimes, this aims to promote public safety by restricting the use of hazardous treatments. Other reasons for regulating specialists may include standardization of recognition for hospital employment and restriction on which practitioners are entitled to receive higher insurance payments for specialist services.

Performance and professionalism supervision

The issue of medical errors, drug abuse, and other problems in physician professional behavior has received significant attention across the world,[32] in particular following a critical 2000 report[33] which "arguably launched" the patient-safety movement.[34] In the U.S., as of 2006 there were few organizations that systematically monitored physician performance; only the Department of Veterans Affairs randomly drug-tests physicians, in contrast to drug-testing practices in other professions that have a major impact on public welfare. Licensing boards at the U.S. state level depend upon continuing education to maintain competence.[35] Through the National Practitioner Data Bank, the Federation of State Medical Boards Disciplinary Report, and the American Medical Association Physician Profile Service, the 67 state medical boards (MD/DO) continually report any adverse or disciplinary actions taken against a licensed physician, so that the other boards in which the physician holds or is applying for a medical license are properly notified and can take reciprocal action against the offending physician.[36] In Europe, as of 2009, health systems are governed by various national laws and can also vary regionally, similar to the United States.[37]

Related occupations and divisions of labor

Chiropractors

Chiropractors use the physician title in some countries. In the United States, practitioners with a Doctor of Chiropractic (DC) have been added to the list of recognized physicians by the Joint Commission on Accreditation of Healthcare Organizations.[38] This change does not affect or alter any health care practitioner’s license or scope of practice.[39] Some medical organizations have criticized the addition of chiropractic to the definition of physician.[39]

In Switzerland, students since 2008 have had the option of studying at the University of Zurich medical school, earning a Bachelor of Medicine (with a focus on chiropractic) and a Master of Chiropractic Medicine.[40][41][42] By attending medical school, they become "physicians" in the more traditional sense. Swiss chiropractors have been found to treat conditions in a similar way to their international counterparts while enjoying a greater number of medical specialist referrals.[43]

Nurse practitioners

Nurse practitioners (NPs) in the United States are advanced practice registered nurses holding a post-graduate degree such as a Doctor of Nursing Practice.[44] In Canada, nurse practitioners typically have a Master of Nursing degree as well as substantial clinical experience accumulated over years of practice. Nurse practitioners are not physicians but may practice alongside physicians in a variety of fields. They are educated in nursing theory and nursing practice. The scope of practice for a nurse practitioner in the United States is defined by regulatory boards of nursing, as opposed to the boards of medicine that regulate physicians.

The Gray Goo Problem

March 20, 2001 by Robert A. Freitas Jr.
Original link:  http://www.kurzweilai.net/the-gray-goo-problem

In Eric Drexler’s classic “grey goo” scenario, out-of-control nanotech replicators wipe out all life on Earth. This paper by Robert A. Freitas Jr. was the first quantitative technical analysis of this catastrophic scenario, also offering possible solutions. It was written in part as an answer to Bill Joy’s recent concerns.

Research Scientist, Zyvex

Originally published April 2000 as “Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations.” Excerpted version published on KurzweilAI.net March 20, 2001.

Abstract

The maximum rate of global ecophagy by biovorous self-replicating nanorobots is fundamentally restricted by the replicative strategy employed; by the maximum dispersal velocity of mobile replicators; by operational energy and chemical element requirements; by the homeostatic resistance of biological ecologies to ecophagy; by ecophagic thermal pollution limits (ETPL); and most importantly by our determination and readiness to stop them.

Assuming current and foreseeable energy-dissipative designs requiring ~100 MJ/kg for chemical transformations (most likely for biovorous systems), ecophagy that proceeds slowly enough to add ~4°C to global warming (near the current threshold for immediate climatological detection) will require ~20 months to run to completion; faster ecophagic devices run hotter, allowing quicker detection by policing authorities. All ecophagic scenarios examined appear to permit early detection by vigilant monitoring, thus enabling rapid deployment of effective defensive instrumentalities.

Introduction

Recent discussions [1] of the possible dangers posed by future technologies such as artificial intelligence, genetic engineering and molecular nanotechnology have made it clear that an intensive theoretical analysis of the major classes of environmental risks of molecular nanotechnology (MNT) is warranted. No systematic assessment of the risks and limitations of MNT-based technologies has yet been attempted. This paper represents a first effort to begin this analytical process in a quantitative fashion.

Perhaps the earliest-recognized and best-known danger of molecular nanotechnology is the risk that self-replicating nanorobots capable of functioning autonomously in the natural environment could quickly convert that natural environment (e.g., “biomass”) into replicas of themselves (e.g., “nanomass”) on a global basis, a scenario usually referred to as the “gray goo problem” but perhaps more properly termed “global ecophagy.”

As Drexler first warned in Engines of Creation [2]:
“Plants” with “leaves” no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous “bacteria” could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop–at least if we make no preparation. We have trouble enough controlling viruses and fruit flies.
Among the cognoscenti of nanotechnology, this threat has become known as the “gray goo problem.” Though masses of uncontrolled replicators need not be gray or gooey, the term “gray goo” emphasizes that replicators able to obliterate life might be less inspiring than a single species of crabgrass. They might be superior in an evolutionary sense, but this need not make them valuable.
The gray goo threat makes one thing perfectly clear: We cannot afford certain kinds of accidents with replicating assemblers.

Gray goo would surely be a depressing ending to our human adventure on Earth, far worse than mere fire or ice, and one that could stem from a simple laboratory accident.

Lederberg [3] notes that the microbial world is evolving at a fast pace, and suggests that our survival may depend upon embracing a “more microbial point of view.” The emergence of new infectious agents such as HIV and Ebola demonstrates that we have as yet little knowledge of how natural or technological disruptions to the environment might trigger mutations in known organisms or unknown extant organisms [81], producing a limited form of “green goo” [92].

However, biovorous nanorobots capable of comprehensive ecophagy will not be easy to build and their design will require exquisite attention to numerous complex specifications and operational challenges. Such biovores can emerge only after a lengthy period of purposeful focused effort, or as a result of deliberate experiments aimed at creating general-purpose artificial life, perhaps by employing genetic algorithms, and are highly unlikely to arise solely by accident.

The Ecophagic Threat

Classical molecular nanotechnology [2, 4] envisions nanomachines predominantly composed of carbon-rich diamondoid materials. Other useful nanochemistries might employ aluminum-rich sapphire (Al2O3) materials, boron-rich (BN) or titanium-rich (TiC) materials, and the like. TiC has one of the highest operating temperatures allowed for commonplace materials (melting point ~3410 K [5]), and while diamond can scratch TiC, TiC can be used to melt diamond.

However, atoms of Al, Ti and B are far more abundant in the Earth’s crust (81,300 ppm, 4400 ppm and 3 ppm, respectively [5]) than in biomass, e.g., the human body (0.1 ppm, 0 ppm, and 0.03 ppm [6]), reducing the direct threat of ecophagy by such systems. On the other hand, carbon is a thousand times less abundant in crustal rocks (320 ppm, mostly carbonates) than in the biosphere (~230,000 ppm).

Furthermore, conversion of the lithosphere into nanomachinery is not a primary concern because ordinary rocks typically contain relatively scarce sources of energy. For instance, natural radioactive isotopes present in crustal rocks vary greatly as a function of the geological composition and history of a region, but generally range from 0.15-1.40 mGy/yr [7], giving a raw power density of 0.28-2.6 × 10^-7 W/m^3 assuming crustal rocks of approximately mean terrestrial density (5522 kg/m^3 [5]).

This is quite insufficient to power nanorobots capable of significant activities; current nanomachine designs typically require power densities on the order of 10^5-10^9 W/m^3 to achieve effective results [6]. (Biological systems typically operate at 10^2-10^6 W/m^3 [6].) Solar power is not readily available below the surface, and the mean geothermal heat flow is only 0.05 W/m^2 at the surface [6], just a tiny fraction of solar insolation.
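
The dose-rate-to-power-density conversion above can be checked with a few lines of arithmetic. This sketch assumes only the figures quoted in the text (0.15-1.40 mGy/yr, mean density 5522 kg/m^3) and the definition of the gray (1 Gy = 1 J/kg):

```python
# Convert crustal radioactive dose rates (mGy/yr) into volumetric power density
# (W/m^3): dose rate (J/kg/yr) times rock density (kg/m^3), divided by seconds/yr.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 s
ROCK_DENSITY = 5522.0                  # kg/m^3, mean terrestrial density quoted above

def dose_rate_to_power_density(mgy_per_year: float) -> float:
    """Absorbed dose rate in mGy/yr -> power density in W/m^3."""
    joules_per_kg_per_year = mgy_per_year * 1e-3  # 1 Gy = 1 J/kg
    return joules_per_kg_per_year * ROCK_DENSITY / SECONDS_PER_YEAR

low = dose_rate_to_power_density(0.15)
high = dose_rate_to_power_density(1.40)
print(f"{low:.2e} to {high:.2e} W/m^3")
```

Both endpoints fall three to four orders of magnitude below the ~10^5 W/m^3 lower bound quoted for effective nanomachine designs, which is the argument's point.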

Hypothesized crustal abiotic highly-reduced petroleum reserves [16] probably could not energize significant replicator nanomass growth due to the anoxic environment deep underground, although potentially large geobacterial populations have been described [10-16] and in principle some unusual though highly limited bacterial energy sources could also be tapped by nanorobots.

For example, some anaerobic bacteria use metals (instead of oxygen) as electron-acceptors [13], with iron present in minerals such as pyroxene or olivine being converted to iron in a more oxidized state in magnetic minerals such as magnetite and maghemite, and using geochemically produced hydrogen to reduce CO2 to methane [11]. Underground bacteria in the Antrim Shale deposit produce 1.2 ×107 m3/day of natural gas (methane) by consuming the 370 MY-old remains of ancient algae [17].

Bioremediation experiments have also been done by Envirogen and others in which pollution-eating bacteria are purposely injected into the ground to metabolize organic toxins; in field tests it has proven difficult to get the bacteria to move through underground aquifers, because the negatively-charged cells tend to adhere to positively charged iron oxides in the soil [18].

However, the primary ecophagic concern is that runaway nanorobotic replicators or “replibots” will convert the entire surface biosphere (the ecology of all living things on the surface of the Earth) into alternative or artificial materials of some type–especially, materials like themselves, e.g., more self-replicating nanorobots.

Since advanced nanorobots might be constructed predominantly of carbon-rich diamondoid materials [4], and since ~12% of all atoms in the human body (representative of biology generally) are carbon atoms [6], or ~23% by weight, the global biological carbon inventory may support the self-manufacture of a final mass of replicating diamondoid nanorobots on the order of ~0.23 Mbio, where Mbio is the total global biomass.
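
The step from "~12% of atoms" to "~23% by weight" can be reproduced from a rough elemental composition of the body. The H/O/C/N atom fractions below are approximate assumptions for illustration, not figures from the paper:

```python
# Rough check that ~12% carbon by atom count corresponds to ~23% by weight.
# Atom fractions are approximate assumed values; trace elements are neglected.

ATOM_FRACTIONS = {"H": 0.62, "O": 0.24, "C": 0.12, "N": 0.011}
ATOMIC_MASS = {"H": 1.008, "O": 15.999, "C": 12.011, "N": 14.007}  # daltons

mean_atomic_mass = sum(f * ATOMIC_MASS[e] for e, f in ATOM_FRACTIONS.items())
carbon_weight_fraction = ATOM_FRACTIONS["C"] * ATOMIC_MASS["C"] / mean_atomic_mass
print(f"carbon ~{carbon_weight_fraction:.1%} by weight")

# If replicating nanorobots were nearly pure carbon, the final nanomass would
# be roughly this fraction of the total global biomass, i.e., ~0.23 Mbio.
```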

Unlike almost any other natural material, biomass can serve both as a source of carbon and as a source of power for nanomachine replication. Ecophagic nanorobots would regard living things as environmental carbon accumulators, and biomass as a valuable ore to be mined for carbon and energy. Of course, biosystems from which all carbon has been extracted can no longer be alive but would instead become lifeless chemical sludge.

Additional Scenarios

Four related scenarios which may lead indirectly to global ecophagy have been identified and are described below. In all cases, early detection appears feasible with advance preparation, and adequate defenses are readily conceived using molecular nanotechnologies of comparable sophistication.

Gray Plankton

The existence of 1-2 × 10^16 kg [24] of global undersea carbon storage on continental margins as CH4 clathrates and a like amount (3.8 × 10^16 kg) of seawater-dissolved carbon as CO2 represents a carbon inventory more than an order of magnitude larger than that in the global biomass. Methane and CO2 can in principle be combined to form free carbon and water, plus 0.5 MJ/kg C of free energy. (Some researchers are studying the possibility of reducing greenhouse gas accumulations by storing liquid [44] or solid [45] CO2 on the ocean floor, which could potentially enable seabed replibots to more easily metabolize methane sources.)
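
The "order of magnitude larger" comparison can be sketched numerically. The biomass-carbon figure below is an assumed round number for illustration only; the undersea inventories and the 0.5 MJ/kg figure are the ones quoted above:

```python
# Compare undersea carbon inventories with the carbon in the global biomass.
# biomass_carbon is an assumed order-of-magnitude figure, not from the paper.

clathrate_carbon = 1.5e16      # kg, midpoint of the 1-2 x 10^16 kg quoted above
dissolved_co2_carbon = 3.8e16  # kg, quoted above
biomass_carbon = 1e15          # kg, assumed round figure

undersea_total = clathrate_carbon + dissolved_co2_carbon
print(f"undersea/biomass carbon ratio ~{undersea_total / biomass_carbon:.0f}x")

# Free energy available at the quoted 0.5 MJ per kg of carbon:
free_energy = undersea_total * 0.5e6  # joules
print(f"~{free_energy:.1e} J of free energy")
```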

Oxygen could also be imported from the surface in pressurized microtanks via buoyancy transport, with the conversion of carbon clathrates to nanomass taking place on the seabed below. The subsequent colonization of the land-based carbon-rich ecology by a large and hungry seabed-grown replicator population is the “gray plankton” scenario. (Phytoplankton, 1-200 microns in size, are the particles most responsible for the variable optical properties of oceanic water because of the strong absorption of these cells in the blue and red portions of the optical spectrum [37].)

If the devices are not largely confined to the sea floor during most of their replication cycle, the natural cell/device ratio could increase by many orders of magnitude, requiring a more diligent census effort. Census-taking nanorobots can alternatively be used to identify, disable, knapsack, or destroy the gray plankton devices.

Gray Dust (Aerovores)

Traditional diamondoid nanomachinery designs [4] have employed eight primary chemical elements, whose atmospheric abundances [46] vary widely. (Silicon is present in air as particulate dust, which may be taken as ~28% Si, as for crustal rock [5], with a global average dust concentration of ~0.0025 mg/m^3.) The requirement for elements that are relatively rare in the atmosphere greatly constrains the potential nanomass and growth rate of airborne replicators.

However, note that at least one of the classical designs exceeds 91% CHON by weight. Although it would be very difficult, it is at least theoretically possible that replicators could be constructed almost solely of CHON, in which case such devices could replicate relatively rapidly using only atmospheric resources, powered by sunlight. A worldwide blanket of airborne replicating dust or “aerovores” that blots out all sunlight has been called the “gray dust” scenario [47]. (There have already been numerous experimental aerial releases of recombinant bacteria [48].)

The most efficient cleanup strategy appears to be the use of air-dropped non-self-replicating nanorobots equipped with prehensile microdragnets.

Alternative airborne or ground-based atmospheric filtration configurations that could permit more rapid filtering are readily envisioned. For example, since drag power varies as the square of the velocity, then by increasing mesh volume 10,000-fold while decreasing airflow velocity 100-fold, total drag power remains unchanged but whole-atmosphere turnover proceeds 100-fold faster, e.g., ~15 minutes.
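
The scaling trade above can be made explicit. Under the text's stated assumption that total drag power scales as (mesh size) × (velocity squared) while volumetric throughput scales as (mesh size) × (velocity), the quoted 10,000-fold/100-fold trade works out as follows:

```python
# Scaling check for the atmospheric-filtration trade quoted above, under the
# stated assumption: drag power ~ size * v^2, throughput ~ size * v.

def ratios(size_factor: float, velocity_divisor: float) -> tuple[float, float]:
    """Return (drag-power ratio, throughput ratio) relative to the baseline."""
    power_ratio = size_factor / velocity_divisor**2
    throughput_ratio = size_factor / velocity_divisor
    return power_ratio, throughput_ratio

power, throughput = ratios(size_factor=10_000, velocity_divisor=100)
print(power, throughput)  # drag power unchanged (1.0), turnover 100x faster
```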

Gray Lichens

Colonies of symbiotic algae and fungi known as lichens (which some have called a form of sub-aerial biofilm) are among the first plants to grow on bare stone, helping in soil formation by slowly etching the rock [55]. Lithobiontic microbial communities such as crustose saxicolous lichens penetrate mineral surfaces up to depths of 1 cm using a complex dissolution, selective transport, and recrystallization process sometimes termed “biological weathering” [56].

Colonies of epilithic (living on rock surfaces) microscopic bacteria produce a 10 micron thick patina on desert rocks (called “desert varnish” [57]) consisting of trace amounts of Mn and Fe oxides that help to provide protection from heat and UV radiation [57-59].

In theory, replicating nanorobots could be made almost entirely of nondiamondoid materials including noncarbon chemical elements found in great abundance in rock such as silicon, aluminum, iron, titanium and oxygen. The subsequent ecophagic destruction of land-based biology by a maliciously programmed noncarbon epilithic replicator population that has grown into a significant nanomass is the “gray lichen” scenario.

Continuous direct census sampling of the Earth’s land surfaces will almost certainly allow early detection, since mineralogical nanorobots should be easily distinguishable from inert rock particles and from organic microbes in the top 3-8 cm of soil.

Malicious Ecophagy

More difficult scenarios involve ecophagic attacks that are launched not to convert biomass to nanomass, but rather primarily to destroy biomass. The optimal malicious ecophagic attack strategy appears to involve a two-phase process.

In the first phase, initial seed replibots are widely distributed in the vicinity of the target biomass, replicating with maximum stealth up to some critical population size by consuming local environmental substrate to build nanomass. In the second phase, the now-large replibot population ceases replication and exclusively undertakes its primary destructive purpose. More generally, this strategy may be described as Build/Destroy.

During the Build phase of the malicious “badbots,” and assuming technological equivalence, defensive “goodbots” enjoy at least three important tactical advantages over their adversaries:

1. Preparation–defensive agencies can manufacture and position in advance overwhelming quantities of (ideally, non-self-replicating) defensive instrumentalities, e.g., goodbots, which can immediately be deployed at the first sign of trouble, with minimal additional risk to the environment;

2. Efficiency–while badbots must simultaneously replicate and defend themselves against attack (either actively or by maintaining stealth), goodbots may concentrate exclusively on attacking badbots (e.g., because of their large numerical superiority in an early deployment) and thus enjoy lower operational overhead and higher efficiency in achieving their purpose, all else equal; and

3. Leverage–in terms of materials, energy, time and sophistication, fewer resources are generally required to confine, disable, or destroy a complex machine than are required to build or replicate the same complex machine from scratch (e.g., one small bomb can destroy a large bomb-making factory; one small missile can sink a large ship).

It is most advantageous to engage a malicious ecophagic threat while it is still in its Build phase. This requires foresight and a commitment to extensive surveillance by the defensive authorities.

Conclusions and Public Policy Recommendations

The smallest plausible biovorous nanoreplicator has a molecular weight of ~1 gigadalton and a minimum replication time of perhaps ~100 seconds, in theory permitting global ecophagy to be completed in as few as ~10^4 seconds. However, such rapid replication creates an immediately detectable thermal signature enabling effective defensive policing instrumentalities to be promptly deployed before significant damage to the ecology can occur.
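
The ~10^4-second figure follows from exponential doubling. This sketch assumes a single one-gigadalton seed and a round 10^15 kg for the consumable global biomass (an assumed figure for illustration), at the quoted 100 s per replication:

```python
import math

# Doublings needed for a single ~1-gigadalton seed replicator to grow into
# ~10^15 kg of nanomass (assumed round biomass figure), at 100 s per doubling.

DALTON_KG = 1.66054e-27
seed_mass = 1e9 * DALTON_KG   # ~1.7e-18 kg
target_mass = 1e15            # kg, assumed consumable global biomass
doubling_time = 100.0         # s, quoted minimum replication time

doublings = math.log2(target_mass / seed_mass)
total_time = doublings * doubling_time
print(f"~{doublings:.0f} doublings, ~{total_time:.0f} s total")
```

About 110 doublings suffice, so the completion time is ~10^4 seconds, consistent with the figure quoted above.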

Such defensive instrumentalities will generate their own thermal pollution during defensive operations. This should not significantly limit the defense strategy because knapsacking, disabling or destroying a working nanoreplicator should consume far less energy than is consumed by a nanoreplicator during a single replication cycle, hence such defensive operations are effectively endothermic.

Ecophagy that proceeds near the current threshold for immediate climatological detection, adding perhaps ~4°C to global warming, may require ~20 months to run to completion, which is plenty of advance warning to mount an effective defense.

Ecophagy that progresses slowly enough to evade easy detection by thermal monitoring alone would require many years to run to completion, could still be detected by direct in situ surveillance, and may be at least partially offset by increased biomass growth rates due to natural homeostatic compensation mechanisms inherent in the terrestrial ecology.

Ecophagy accomplished indirectly by a replibot population pre-grown on nonbiological substrate may be avoided by diligent thermal monitoring and direct census sampling of relevant terrestrial niches to search for growing, possibly dangerous, pre-ecophagous nanorobot populations.

Specific public policy recommendations suggested by the results of the present analysis include:

1. An immediate international moratorium on all artificial life experiments implemented as nonbiological hardware. In this context, “artificial life” is defined as autonomous foraging replicators, excluding purely biological implementations (already covered by NIH guidelines [65] tacitly accepted worldwide) and also excluding software simulations which are essential preparatory work and should continue. Alternative “inherently safe” replication strategies such as the broadcast architecture [66] are already well-known.

2. Continuous comprehensive infrared surveillance of Earth’s surface by geostationary satellites, both to monitor the current biomass inventory and to detect (and then investigate) any rapidly-developing artificial hotspots. This could be an extension of current or proposed Earth-monitoring systems (e.g., NASA’s Earth Observing System [67] and disease remote-sensing programs [93]) originally intended to understand and predict global warming, changes in land use, and so forth–initially using non-nanoscale technologies. Other methods of detection are feasible and further research is required to identify and properly evaluate the full range of alternatives.

3. Initiating a long-term research program designed to acquire the knowledge and capability needed to counteract ecophagic replicators, including scenario-building and threat analysis with numerical simulations, measure/countermeasure analysis, theory and design of global monitoring systems capable of fast detection and response, IFF (Identification Friend or Foe) discrimination protocols, and eventually the design of relevant nanorobotic systemic defensive capabilities and infrastructure.

A related long-term recommendation is to initiate a global system of comprehensive in situ ecosphere surveillance, potentially including possible nanorobot activity signatures (e.g. changes in greenhouse gas concentrations), multispectral surface imaging to detect disguised signatures, and direct local nanorobot census sampling on land, sea, and air, as warranted by the pace of development of new MNT capabilities.

Acknowledgments

The author thanks Robert J. Bradbury, J. Storrs Hall, James Logajan, Markus Krummenacker, Thomas McKendree, Ralph C. Merkle, Christopher J. Phoenix, Tihamer Toth-Fejel, James R. Von Ehr II, and Eliezer S. Yudkowsky for helpful comments on earlier versions of this manuscript; J. S. Hall for the word “aerovore”; and R. J. Bradbury for preparing the hypertext version of this document.

References


1. Bill Joy, “Why the future doesn’t need us,” Wired (April 2000); response by Ralph Merkle, “Text of prepared comments by Ralph C. Merkle at the April 1, 2000 Stanford Symposium organized by Douglas Hofstadter.”

2. K. Eric Drexler, Engines of Creation: The Coming Era of Nanotechnology, Anchor Press/Doubleday, New York, 1986.

3. Joshua Lederberg, “Infectious History,” Science 288(14 April 2000):287-293.

4. K. Eric Drexler, Nanosystems: Molecular Machinery, Manufacturing, and Computation, John Wiley & Sons, NY, 1992.

5. Robert C. Weast, Handbook of Chemistry and Physics, 49th Edition, CRC, Cleveland OH, 1968.

6. Robert A. Freitas Jr., Nanomedicine, Volume I, Landes Bioscience, Georgetown, TX, 1999. See at: http://www.nanomedicine.com.

7. Edward L. Alpen, Radiation Biophysics, Second Edition, Academic Press, New York, 1998.

8. Walter M. Elsasser, “Earth,” Encyclopedia Britannica 7 (1963):845-852.

9. G. Buntebarth, A. Gliko, “Heat Flow in the Earth’s Crust and Mantle,” in A.S. Marfunin, ed., Advanced Mineralogy, Volume 1: Composition, Structure, and Properties of Mineral Matter: Concepts, Results, and Problems, Springer-Verlag, New York, 1994, pp. 430-435.

10. Karsten Pedersen, “The deep subterranean biosphere,” Earth Sci. Rev. 34(1993):243-260.

11. Todd O. Stevens, James P. McKinley, “Lithoautotrophic Microbial Ecosystems in Deep Basalt Aquifers,” Science 270(20 October 1995):450-454; see also: G. Jeffrey Taylor, “Life Underground,” PSR Discoveries, 21 December 1996.

12. Stephen Jay Gould, Life’s Grandeur: The Spread of Excellence from Plato to Darwin, Jonathan Cape, 1996.

13. Bill Cabage, “Digging Deeply,” September 1996.

14. James K. Fredrickson, Tullis C. Onstott, “Microbes Deep Inside the Earth,” Sci. Am. 275(October 1996):68-73.

15. Richard Monastersky, “Deep Dwellers: Microbes Thrive Far Below Ground,” Science News 151(29 March 1997):192-193.

16. Thomas Gold, The Deep Hot Biosphere, Copernicus Books, 1999; “The deep, hot biosphere,” Proc. Natl. Acad. Sci. 89(1992):6045-6049. See also: P.N. Kropotkin, “Degassing of the Earth and the Origin of Hydrocarbons,” Intl. Geol. Rev. 27(1985):1261-1275.

17. Karl Leif Bates, “Michigan’s natural gas fields: Blame it on underground bacteria,” The Detroit News, 12 September 1996.

18. JoAnn Gutin, “Making Bacteria Move,” Princeton Weekly Bulletin, 17 November 1997.

19. Robert A. Freitas Jr., William P. Gilbreath, eds., Advanced Automation for Space Missions, Proceedings of the 1980 NASA/ASEE Summer Study held at the University of Santa Clara, Santa Clara, CA, June 23-August 29, 1980; NASA Conference Publication CP-2255, November 1982.

20. R.K. Dixon, S. Brown, R.A. Houghton, A.M. Solomon, M.C. Trexler, J. Wisniewski, “Carbon Pools and Flux of Global Forest Ecosystems,” Science 263(14 January 1994):185-190.

21. Christopher B. Field, Michael J. Behrenfeld, James T. Randerson, Paul Falkowski, “Primary Production of the Biosphere: Integrating Terrestrial and Oceanic Components,” Science 281(10 July 1998):237-240.

22. Peter M. Vitousek, Harold A. Mooney, Jane Lubchenco, Jerry M. Melillo, “Human Domination of Earth’s Ecosystems,” Science 277(25 July 1997):494-499.

23. Colin J. Campbell, Jean H. Laherrere, “The End of Cheap Oil,” Scientific American 278(March 1998):78-83; Robert G. Riley Enterprises, “World Petroleum Reserves,” 1999; L.F. Ivanhoe, “Future world oil supplies: There is a finite limit,” World Oil, October 1995.

24. James P. Kennett, Kevin G. Cannariato, Ingrid L. Hendy, Richard J. Behl, “Carbon Isotopic Evidence for Methane Hydrate Instability During Quaternary Interstadials,” Science 288(7 April 2000):128-133.

25. World Coal Institute, “Coal–Power for Progress,” Third Edition, January 1999, Statistics Canada, “World Coal Reserves,” 1996; “U.S. Coal Reserves: 1997 Update,” February 1999, Energy Information Administration, Washington, DC.

26. F.J. Millero, “Thermodynamics of the carbon dioxide system in the oceans,” Geochim. Cosmochim. Acta 59(1995):661-677; see also F.J. Millero, “Carbon Dioxide in the South Pacific.”

27. Michael T. Madigan, John M. Martinko, Jack Parker, eds., Brock’s Biology of Microorganisms, 9th Edition, Prentice-Hall, NJ, 1999; Kenneth J. Ryan, ed., Sherris Medical Microbiology: An Introduction to Infectious Diseases, 3rd Edition, McGraw-Hill, New York, 1994.

28. ORNL, “Major World Ecosystem Complexes Ranked by Carbon in Live Vegetation,” April 1997.

29. J.H. Martin, The IronEx Group, “Testing the iron hypothesis in the ecosystems of the equatorial Pacific Ocean,” Nature 371(1994):123-129; Sallie W. Chisholm, “The iron hypothesis: Basic research meets environmental policy,” Rev. Geophys. 33(1995):Supplement. See also: “Extra iron makes blue deserts bloom,” New Scientist 152(12 October 1996).

30. Richard W. Hughes, Ruby & Sapphire, RWH Publishing, Boulder CO, 1997.

31. F. Albert Cotton, Geoffrey Wilkinson, Advanced Inorganic Chemistry: A Comprehensive Text, Second Edition, John Wiley & Sons, New York, 1966.

32. Ralph C. Merkle, personal communication, 22 March 2000.

33. P.G. Jarvis, Tree Physiol. 2(1986):347-.

34. Oliver L. Phillips et al., “Changes in the Carbon Balance of Tropical Forests: Evidence from Long-Term Plots,” Science 282(16 October 1998):439-442.

35. S. Fan, M. Gloor, J. Mahlman, S. Pacala, J. Sarmiento, T. Takahashi, P. Tans, “A Large Terrestrial Carbon Sink in North America Implied by Atmospheric and Oceanic Carbon Dioxide Data and Models,” Science 282(16 October 1998):442-446.

36. D. Stramski, D.A. Kiefer, “Light Scattering by Microorganisms in the Open Ocean,” Prog. Oceanogr.28(1991):343.

37. Curtis D. Mobley, “Chapter 43. The Optical Properties of Water,” in Michael Bass, ed., Handbook of Optics, Volume I, McGraw-Hill, Inc., New York, 1995, pp. 43.3-43.56.

38. Neil A. Campbell, Jane B. Reece, Lawrence G. Mitchell, Biology–Interactive Study Guide, Benjamin/Cummings Science, San Francisco, CA, 1999. See also: Paul Broady, “BIOL 113–Diversity of Life,” lecture notes.

39. William B. Whitman, David C. Coleman, “Prokaryotes: the unseen majority,” Proc. Natl. Acad. Sci. (USA) 94(June 1998):6578-6583.

40. B.R. Strain, J.D. Cure, eds., Direct Effects of Increasing Carbon Dioxide on Vegetation, Publ. ER-0238, U.S. Department of Energy, Washington, DC, 1985; R.J. Luxmoore, R.J. Norby, E.G. O’Neill, in Forest Plants and Forest Protection, 18th Intl. Union of Forestry Research Organizations (IUFRO), World Congress, Div. 2, 1987, IUFRO Secretariate, Vienna, 1987, Vol. 1, pp. 178-183; P.S. Curtis, B.G. Drake, P.W. Leadley, W.J. Arp, D.F. Whigham, Oecologia 78(1989):20; D. Eamus, P.G. Jarvis, Adv. Ecol. Res. 19(1989):1; P.G. Jarvis, Philos. Trans. R. Soc. London B 324(1989):369; R.J. Norby, E.G. O’Neill, New Phytol.117(1991):515.

41. Christian Korner, John A. Arnone III, “Responses to Elevated Carbon Dioxide in Artificial Tropical Ecosystems,” Science257(18 September 1992):1672-1675.

42. Eric T. Sundquist, “The Global Carbon Dioxide Budget,” Science 259(12 February 1993):934-941.

43. Hubertus Fischer, Martin Wahlen, Jesse Smith, Derek Mastroianni, Druce Deck, “Ice Core Records of Atmospheric CO2 Around the Last Three Glacial Terminations,” Science 283(12 March 1999):1712-1714.

44. Peter G. Brewer, Gernot Friederich, Edward T. Peltzer, Franklin M. Orr Jr., “Direct Experiments on the Ocean Disposal of Fossil Fuel CO2,” Science 284(7 May 1999):943-945; “Ocean studied for carbon dioxide storage,” 10 May 1999.

45. C.N. Murray, L. Visintini, G. Bidoglio, B. Henry, “Permanent Storage of Carbon Dioxide in the Marine Environment: The Solid CO2 Penetrator,” Energy Convers. Mgmt.37(1996):1067-1072.

46. Dennis K. Killinger, James H. Churnside, Laurence S. Rothman, “Chapter 14. Atmospheric Optics,” in Michael Bass, Eric W. Van Stryland, David R. Williams, William L. Wolfe, eds., Handbook of Optics, Volume I: Fundamentals, Techniques, and Design, Second Edition, McGraw-Hill, Inc., New York, 1995, pp. 44.1-44.50.

47. Ralph C. Merkle, personal communication, 6 April 2000.

48. Guy R. Knudsen, Louise-Marie C. Dandurand, “Model for Dispersal and Epiphytic Survival of Bacteria Applied to Crop Foliage,” paper presented at the 7th Symposium on Environmental Releases of Biotechnology Products: Risk Assessment Methods and Research Progress, 6-8 June 1995, Pensacola, FL.

49. Jake Page, “Making the Chips that Run the World,” Smithsonian 30(January 2000):36-46.

50. A. Borghesi, G. Guizzetti, “Graphite (C),” in Edward D. Palik, ed., Handbook of Optical Constants of Solids II, Academic Press, New York, 1991, pp. 449-460.

51. B. Ranby, J.F. Rabek, Photodegradation, Photo-oxidation and Photostabilization of Polymers, John Wiley & Sons, New York, 1975.

52. William S. Spector, ed., Handbook of Biological Data, W.B. Saunders Company, Philadelphia PA, 1956.

53. W.J. Kowalski, William Bahnfleth, “Airborne Respiratory Diseases and Mechanical Systems for Control of Microbes,” HPAC (July 1998).

54. M. Edmund Speare, Wayne Anthony McCurdy, Allan Grierson, “Coal and Coal Mining,” Encyclopedia Britannica5(1963):961-975; Helmut E. Landsberg, “Dust,” Encyclopedia Britannica7(1963):787-791; and Gerrit Willem Hendrik Schepers, “Pneumonoconiosis,” Encyclopedia Britannica 18(1963):99-100.

55. T.H. Nash, Lichen Biology, Cambridge University Press, Cambridge, 1996.

56. W.W. Barker, J.F. Banfield, “Biologically- versus inorganically-mediated weathering: relationships between minerals and extracellular polysaccharides in lithobiontic communities,” Chemical Geology132(1996):55-69; J.F. Banfield, W.W. Barker, S.A. Welch, A. Taunton, “Biological impact on mineral dissolution: Application of the lichen model to understanding mineral weathering in the rhizosphere,” Proc. Nat. Acad. Sci. (USA) 96(1999):3404-3411. See also: W.W. Barker, “Interactions between silicate minerals and lithobiontic microbial communities (lichens),”.

57. Ronald L. Dorn, Theodore M. Oberlander, “Microbial Origin of Desert Varnish,” Science 213(1981):1245-1247; R.L. Dorn, “Rock varnish,” Amer. Sci. 79(1991):542-553.

58. W.W. Barker, S.A. Welch, S. Chu, J.F. Banfield, “Experimental observations of the effects of bacteria on aluminosilicate weathering,” Amer. Mineral.83(1998):1551-1563.

59. S.A. Welch, W.W. Barker, J.F. Banfield, “Microbial extracellular polysaccharides and plagioclase dissolution,” Geochim. Cosmochim. Acta 63(1999):1405-1419.

60. K.L. Temple, A.R. Colmer, “The autotrophic oxidation of iron by a new bacterium, Thiobacillus ferrooxidans,” J. Bacteriol. 62(1951):605-611.

61. P.A. Trudinger, “Microbes, Metals, and Minerals,” Minerals Sci. Eng. 3(1971):13-25; C.L. Brierley, “Bacterial Leaching,” CRC Crit. Rev. Microbiol. 6(1978):207-262; “Microbiological mining,” Sci. Am. 247(February 1982):44-53.

62. A. Okereke, S.E. Stevens, “Kinetics of iron oxidation by Thiobacillus ferrooxidans,” Appl. Environ. Microbiol. 57(1991):1052-1056.

63. Verena Peters, Peter H. Janssen, Ralf Conrad, “Transient Production of Formate During Chemolithotrophic Growth of Anaerobic Microorganisms on Hydrogen,” Curr. Microbiol. 38(1999):285-289.

64. Mark S. Coyne, “Lecture 24–Biogeochemical Cycling: Soil Mineral Transformations of Metals,” Agripedia: Introductory Soil Biology; “Lecture 3–Soil as a Microbial Habitat: Microbial Distribution,” Agripedia: Introductory Soil Biology.

65. “NIH Guidelines for Research Involving Recombinant DNA Molecules,” January 1996 revision.

66. Ralph C. Merkle, “Self-replicating systems and low cost manufacturing,” in M.E. Welland, J.K. Gimzewski, eds., The Ultimate Limits of Fabrication and Measurement, Kluwer, Dordrecht, 1994, pp. 25-32.

67. “Links to Earth Observing System (EOS) Data and Information.”


69. World Resources Institute, World Resources 1988-89, Basic Books, Inc., New York, 1988, p. 169; EPA, Federal Register 61(13 December 1996):657-63.

70. Sankar Chatterjee, The Rise of Birds: 225 Million Years of Evolution, Johns Hopkins University Press, Baltimore, MD, 1997.

71. Paul R. Ehrlich, David S. Dobkin, Darryl Wheye, “Adaptations for Flight,” 1988.

72. H. J. Morowitz, M. E. Tourtellotte, “The Smallest Living Cells,” Sci. Am. 206(March 1962):117-126; H.J. Morowitz, Prog. Theoret. Biol. 1(1967):1.

73. A. R. Mushegian, E. V. Koonin, “A minimal gene set for cellular life derived by comparison of complete bacterial genomes,” Proc. Natl. Acad. Sci. (USA) 93(17 September 1996):10268-10273.

74. R. Himmelreich, H. Hilbert, H. Plagens, E. Pirkl, B.C. Li, R. Herrmann, “Complete sequence analysis of the genome of the bacterium Mycoplasma pneumoniae,” Nucleic Acids Res. 24(15 November 1996):4420-4449.

75. C. B. Williams, Patterns in the Balance of Nature and Related Problems in Quantitative Ecology, Academic Press, London, 1964.

76. C. W. Sabrosky, “How many insects are there?” in Insects, The Yearbook of Agriculture, U.S. Department of Agriculture, Washington, DC, 1952.

77. “Numbers of Insects (Species and Individuals),” Department of Entomology, National Museum of Natural History.

78. Nelson Thompson, “Biology/Entomology 173. Insect Physiology, Spring 1998, Lecture 17: Respiration,” 6 November 1997; “Some biological problems involving diffusion.”

79. J. Storrs Hall, personal communication, 6 May 2000.

80. U.S. Bureau of the Census, Statistical Abstract of the United States: 1996, 116th Edition, Washington, DC, October 1996.

81. “…there are dozens of HIV-like viruses in wild monkey populations, and if natural transfer of AIDS viruses from chimpanzees to monkeys has already occurred, there is no reason why it should not happen again.” Beatrice Hahn, Howard Hughes Medical Institute scientist, quoted in: Declan Butler, “Analysis of polio vaccine could end dispute over how AIDS originated,” Nature 404(2 March 2000):9.

82. “Recycled Tires for a Building System,” 1999; “Annual Form 10-KSB Report,” The Quantum Group, Inc., 31 December 1998; “Return Trip: How To Recycle the Family Car,” 1994.

83. “Solar Radiation Data Manual for Flat-Plate and Concentrating Collectors: 30-Year Average of Monthly Solar Radiation, 1961-1990, Spreadsheet Portable Data Files,” DOE Renewable Resource Data Center.

84. George M. Hidy, The Winds: The Origins and Behavior of Atmospheric Motion, D. Van Nostrand Company, Princeton, NJ, 1967.

85. Evan R.C. Reynolds, Frank B. Thompson, eds., Forests, Climate, and Hydrology: Regional Impacts, United Nations University Press, Tokyo, Japan, 1988; see: “Effect of surface cover on land surface processes.”


87. PSUBAMS Model, “Dual roughness regimes,” April 1997.

88. Horace Robert Byers, Synoptic and Aeronautical Meteorology, McGraw-Hill Book Company, New York, 1937.


90. Joseph Morgan, Introduction to University Physics, Volume One, Allyn and Bacon, Inc., Boston, MA, 1963.

91. Reporting on Climate Change: Understanding the Science.Chapter 3. Greenhouse Gases, Some Basics,” Environmental Health Center, National Safety Council, Washington, DC, November 1994, ISBN 0-87912-177-7.

92. Robert J. Bradbury, personal communication, 8 May 2000.

93. B. Lobitz, L. Beck, A. Huq, B. Wood, G. Fuchs, A.S.G. Faruque, R. Colwell, “Climate and infectious disease: Use of remote sensing for detection of Vibrio cholerae by indirect measurement,” Proc. Natl. Acad. Sci. (USA) 97(2000):1438-1443.
