Saturday, April 12, 2025

Naturalism (philosophy)

From Wikipedia, the free encyclopedia
Double rainbow at Yosemite National Park. According to naturalism, the causes of all phenomena are to be found within the universe, not in transcendental factors beyond it.

In philosophy, naturalism is the idea that only natural laws and forces (as opposed to supernatural ones) operate in the universe. In its primary sense, it is also known as ontological naturalism, metaphysical naturalism, pure naturalism, philosophical naturalism and antisupernaturalism. "Ontological" refers to ontology, the philosophical study of what exists. Philosophers often treat naturalism as equivalent to materialism, but there are important distinctions between the philosophies.

For example, philosopher Paul Kurtz argued that nature is best accounted for by reference to material principles. These principles include mass, energy, and other physical and chemical properties accepted by the scientific community. Further, this sense of naturalism holds that spirits, deities, and ghosts are not real and that there is no "purpose" in nature. This stronger formulation of naturalism is commonly referred to as metaphysical naturalism. On the other hand, the more moderate view that naturalism should be assumed in one's working methods as the current paradigm, without any further consideration of whether naturalism is true in the robust metaphysical sense, is called methodological naturalism.

With the exception of pantheists – who believe that nature is identical with divinity while not recognizing a distinct personal anthropomorphic god – theists challenge the idea that nature contains all of reality. According to some theists, natural laws may be viewed as secondary causes of God(s).

In the 20th century, Willard Van Orman Quine, George Santayana, and other philosophers argued that the success of naturalism in science meant that scientific methods should also be used in philosophy. According to this view, science and philosophy are not always distinct from one another, but instead form a continuum.

"Naturalism is not so much a special system as a point of view or tendency common to a number of philosophical and religious systems; not so much a well-defined set of positive and negative doctrines as an attitude or spirit pervading and influencing many doctrines. As the name implies, this tendency consists essentially in looking upon nature as the one original and fundamental source of all that exists, and in attempting to explain everything in terms of nature. Either the limits of nature are also the limits of existing reality, or at least the first cause, if its existence is found necessary, has nothing to do with the working of natural agencies. All events, therefore, find their adequate explanation within nature itself. But, as the terms nature and natural are themselves used in more than one sense, the term naturalism is also far from having one fixed meaning".

History

Ancient and medieval philosophy

Naturalism is most notably a Western phenomenon, but an equivalent idea has long existed in the East. Naturalism was the foundation of two of the six orthodox schools and one heterodox school of Hinduism. Samkhya, one of the oldest schools of Indian philosophy, puts nature (Prakriti) as the primary cause of the universe, without assuming the existence of a personal God or Ishvara. The Carvaka, Nyaya, and Vaisheshika schools originated in the 7th, 6th, and 2nd centuries BCE, respectively. Similarly, though unnamed and never articulated into a coherent system, one tradition within Confucian philosophy embraced a form of naturalism dating to Wang Chong in the 1st century, if not earlier; but it arose independently and had little influence on the development of modern naturalist philosophy or on Eastern or Western culture.

Ancient Roman mosaic showing Anaximander holding a sundial. Anaximander was one of the contributors to naturalism in ancient Greek philosophy.

Western metaphysical naturalism originated in ancient Greek philosophy. The earliest pre-Socratic philosophers, especially the Milesians (Thales, Anaximander, and Anaximenes) and the atomists (Leucippus and Democritus), were labeled by their peers and successors "the physikoi" (from the Greek φυσικός or physikos, meaning "natural philosopher", drawing on the word φύσις or physis, meaning "nature") because they investigated natural causes, often excluding any role for gods in the creation or operation of the world. This eventually led to fully developed systems such as Epicureanism, which sought to explain everything that exists as the product of atoms falling and swerving in a void.

Aristotle surveyed the thought of his predecessors and conceived of nature in a way that charted a middle course between their excesses.

Plato's world of eternal and unchanging Forms, imperfectly represented in matter by a divine Artisan, contrasts sharply with the various mechanistic Weltanschauungen, of which atomism was, by the fourth century at least, the most prominent. This debate was to persist throughout the ancient world. Atomistic mechanism got a shot in the arm from Epicurus, while the Stoics adopted a divine teleology. The choice seems simple: either show how a structured, regular world could arise out of undirected processes, or inject intelligence into the system. This was how Aristotle… when still a young acolyte of Plato, saw matters. Cicero… preserves Aristotle's own cave-image: if troglodytes were brought on a sudden into the upper world, they would immediately suppose it to have been intelligently arranged. But Aristotle grew to abandon this view; although he believes in a divine being, the Prime Mover is not the efficient cause of action in the Universe, and plays no part in constructing or arranging it. But, although he rejects the divine Artificer, Aristotle does not resort to a pure mechanism of random forces. Instead he seeks to find a middle way between the two positions, one which relies heavily on the notion of Nature, or phusis.

With the rise and dominance of Christianity in the West and the later spread of Islam, metaphysical naturalism was generally abandoned by intellectuals. Thus, there is little evidence for it in medieval philosophy.

Modern philosophy

It was not until the early modern era of philosophy and the Age of Enlightenment that naturalists like Benedict Spinoza (who put forward a theory of psychophysical parallelism), David Hume, and the proponents of French materialism (notably Denis Diderot, Julien La Mettrie, and Baron d'Holbach) started to emerge again in the 17th and 18th centuries. In this period, some metaphysical naturalists adhered to a distinct doctrine, materialism, which became the dominant category of metaphysical naturalism widely defended until the end of the 19th century.

Thomas Hobbes was a proponent of naturalism in ethics who acknowledged normative truths and properties. Immanuel Kant rejected (reductionist) materialist positions in metaphysics, but he was not hostile to naturalism. His transcendental philosophy is considered to be a form of liberal naturalism.

Hegel, who together with Friedrich Schelling developed the form of natural philosophy recognised as Naturphilosophie

In late modern philosophy, Naturphilosophie, a form of natural philosophy, was developed by Friedrich Wilhelm Joseph von Schelling and Georg Wilhelm Friedrich Hegel as an attempt to comprehend nature in its totality and to outline its general theoretical structure.

A version of naturalism that arose after Hegel was Ludwig Feuerbach's anthropological materialism, which influenced Karl Marx and Friedrich Engels's historical materialism, Engels's "materialist dialectic" philosophy of nature (Dialectics of Nature), and their follower Georgi Plekhanov's dialectical materialism.

Another notable school of late modern philosophy advocating naturalism was German materialism: members included Ludwig Büchner, Jacob Moleschott, and Carl Vogt.

The current usage of the term naturalism "derives from debates in America in the first half of the 20th century. The self-proclaimed 'naturalists' from that period included John Dewey, Ernest Nagel, Sidney Hook, and Roy Wood Sellars."

Contemporary philosophy

A politicized version of naturalism that has arisen in contemporary philosophy is Ayn Rand's Objectivism. Objectivism is an expression of capitalist ethical idealism within a naturalistic framework. An example of a more progressive naturalistic philosophy is secular humanism.

Currently, metaphysical naturalism is more widely embraced than in previous centuries, especially but not exclusively in the natural sciences and the Anglo-American, analytic philosophical communities. While the vast majority of the population of the world remains firmly committed to non-naturalistic worldviews, contemporary defenders of naturalism and/or naturalistic theses and doctrines today include Graham Oppy, Kai Nielsen, J. J. C. Smart, David Malet Armstrong, David Papineau, Paul Kurtz, Brian Leiter, Daniel Dennett, Michael Devitt, Fred Dretske, Paul and Patricia Churchland, Mario Bunge, Jonathan Schaffer, Hilary Kornblith, Leonard Olson, Quentin Smith, Paul Draper and Michael Martin, among many other academic philosophers.

According to David Papineau, contemporary naturalism is a consequence of the build-up of scientific evidence during the twentieth century for the "causal closure of the physical", the doctrine that all physical effects can be accounted for by physical causes.

By the middle of the twentieth century, the acceptance of the causal closure of the physical realm led to even stronger naturalist views. The causal closure thesis implies that any mental and biological causes must themselves be physically constituted, if they are to produce physical effects. It thus gives rise to a particularly strong form of ontological naturalism, namely the physicalist doctrine that any state that has physical effects must itself be physical. From the 1950s onwards, philosophers began to formulate arguments for ontological physicalism. Some of these arguments appealed explicitly to the causal closure of the physical realm (Feigl 1958, Oppenheim and Putnam 1958). In other cases, the reliance on causal closure lay below the surface. However, it is not hard to see that even in these latter cases the causal closure thesis played a crucial role.

In contemporary continental philosophy, Quentin Meillassoux proposed speculative materialism, a post-Kantian return to David Hume which can strengthen classical materialist ideas. This speculative approach to philosophical naturalism has been further developed by other contemporary thinkers including Ray Brassier and Drew M. Dalton.

Etymology

The term "methodological naturalism" is much more recent than "naturalism" itself. According to Ronald Numbers, it was coined in 1983 by Paul de Vries, a Wheaton College philosopher. De Vries distinguished between what he called "methodological naturalism", a disciplinary method that says nothing about God's existence, and "metaphysical naturalism", which "denies the existence of a transcendent God". The term "methodological naturalism" had been used in 1937 by Edgar S. Brightman in an article in The Philosophical Review as a contrast to "naturalism" in general, but there the idea was not really developed to its more recent distinctions.

Description

The Flammarion engraving, an 1888 illustration of the cosmos.

According to Steven Schafersman, naturalism is a philosophy that maintains that:

  1. "Nature encompasses all that exists throughout space and time;
  2. Nature (the universe or cosmos) consists only of natural elements, that is, of spatio-temporal physical substance – mass–energy. Non-physical or quasi-physical substance, such as information, ideas, values, logic, mathematics, intellect, and other emergent phenomena, either supervene upon the physical or can be reduced to a physical account;
  3. Nature operates by the laws of physics and in principle, can be explained and understood by science and philosophy;
  4. The supernatural does not exist, i.e., only nature is real. Naturalism is therefore a metaphysical philosophy opposed primarily by supernaturalism".

Or, as Carl Sagan succinctly put it: "The Cosmos is all that is or ever was or ever will be."

In addition Arthur C. Danto states that naturalism, in recent usage, is a species of philosophical monism according to which whatever exists or happens is natural in the sense of being susceptible to explanation through methods which, although paradigmatically exemplified in the natural sciences, are continuous from domain to domain of objects and events. Hence, naturalism is polemically defined as repudiating the view that there exists or could exist any entities which lie, in principle, beyond the scope of scientific explanation.

Arthur Newell Strahler states: "The naturalistic view is that the particular universe we observe came into existence and has operated through all time and in all its parts without the impetus or guidance of any supernatural agency." "The great majority of contemporary philosophers urge that reality is exhausted by nature, containing nothing 'supernatural', and that the scientific method should be used to investigate all areas of reality, including the 'human spirit'." Philosophers widely regard naturalism as a "positive" term, and "few active philosophers nowadays are happy to announce themselves as 'non-naturalists'". Still, "philosophers concerned with religion tend to be less enthusiastic about 'naturalism'", and given the term's popularity some divergence over its meaning is "inevitable": construed more narrowly, naturalism would exclude philosophers such as John McDowell, David Chalmers and Jennifer Hornsby (to their chagrin), while those not so disqualified remain nonetheless content "to set the bar for 'naturalism' higher."

Alvin Plantinga stated that naturalism is presumed not to be a religion. "However, in one very important respect it resembles religion by performing the cognitive function of a religion. There is a set of deep human questions to which a religion typically provides an answer. In like manner naturalism gives a set of answers to these questions."

Providing assumptions required for science

According to Robert Priddy, all scientific study inescapably builds on at least some essential assumptions that cannot be tested by scientific processes; that is, scientists must start with some assumptions as to the ultimate analysis of the facts with which science deals. These assumptions are then justified partly by their adherence to the types of occurrence of which we are directly conscious, and partly by their success in representing the observed facts with a certain generality, devoid of ad hoc suppositions. Kuhn also claims that all science is based on assumptions about the character of the universe, rather than merely on empirical facts. These assumptions – a paradigm – comprise a collection of beliefs, values and techniques that are held by a given scientific community, which legitimize their systems and set the limitations to their investigation. For naturalists, nature is the only reality, the "correct" paradigm, and there is no such thing as the supernatural, i.e. anything above, beyond, or outside of nature. The scientific method is to be used to investigate all reality, including the human spirit.

Some claim that naturalism is the implicit philosophy of working scientists, and that the following basic assumptions are needed to justify the scientific method:

  1. That there is an objective reality shared by all rational observers. "The basis for rationality is acceptance of an external objective reality." "Objective reality is clearly an essential thing if we are to develop a meaningful perspective of the world. Nevertheless its very existence is assumed." "Our belief that objective reality exists is an assumption that it arises from a real world outside of ourselves. As infants we made this assumption unconsciously. People are happier to make this assumption, which adds meaning to our sensations and feelings, than to live with solipsism." "Without this assumption, there would be only the thoughts and images in our own mind (which would be the only existing mind) and there would be no need of science, or anything else."
  2. That this objective reality is governed by natural laws;
    "Science, at least today, assumes that the universe obeys knowable principles that don't depend on time or place, nor on subjective parameters such as what we think, know or how we behave." Hugh Gauch argues that science presupposes that "the physical world is orderly and comprehensible."
  3. That reality can be discovered by means of systematic observation and experimentation.
    Stanley Sobottka said: "The assumption of external reality is necessary for science to function and to flourish. For the most part, science is the discovering and explaining of the external world." "Science attempts to produce knowledge that is as universal and objective as possible within the realm of human understanding."
  4. That Nature has uniformity of laws and most if not all things in nature must have at least a natural cause.
    Biologist Stephen Jay Gould referred to these two closely related propositions as the constancy of nature's laws and the operation of known processes. Simpson agrees that the axiom of uniformity of law, an unprovable postulate, is necessary in order for scientists to extrapolate inductive inference into the unobservable past in order to meaningfully study it. "The assumption of spatial and temporal invariance of natural laws is by no means unique to geology since it amounts to a warrant for inductive inference which, as Bacon showed nearly four hundred years ago, is the basic mode of reasoning in empirical science. Without assuming this spatial and temporal invariance, we have no basis for extrapolating from the known to the unknown and, therefore, no way of reaching general conclusions from a finite number of observations. (Since the assumption is itself vindicated by induction, it can in no way "prove" the validity of induction — an endeavor virtually abandoned after Hume demonstrated its futility two centuries ago)." Gould also notes that natural processes such as Lyell's "uniformity of process" are an assumption: "As such, it is another a priori assumption shared by all scientists and not a statement about the empirical world." According to R. Hooykaas: "The principle of uniformity is not a law, not a rule established after comparison of facts, but a principle, preceding the observation of facts ... It is the logical principle of parsimony of causes and of economy of scientific notions. By explaining past changes by analogy with present phenomena, a limit is set to conjecture, for there is only one way in which two things are equal, but there are an infinity of ways in which they could be supposed different."
  5. That experimental procedures will be done satisfactorily without any deliberate or unintentional mistakes that will influence the results.
  6. That experimenters won't be significantly biased by their presumptions.
  7. That random sampling is representative of the entire population.
    A simple random sample (SRS) is the most basic probabilistic method for drawing a sample from a population. The benefit of SRS is that every member of the population has an equal chance of being selected, which supports statistically valid conclusions about the population.
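The idea of a simple random sample can be sketched in a few lines of Python; the names and the toy population below are illustrative, not from the original text:

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw a simple random sample of size n without replacement:
    every subset of size n is equally likely to be chosen."""
    rng = random.Random(seed)  # seed only for reproducibility in examples
    return rng.sample(population, n)

# Hypothetical population: 100 numbered survey respondents.
population = list(range(1, 101))
sample = simple_random_sample(population, 10, seed=42)
print(sample)  # ten distinct respondents, each equally likely to appear
```

Note that equal selection probability is a property of the procedure, not a guarantee that any single sample mirrors the population; that is exactly why the representativeness of random sampling is listed above as an assumption.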

Methodological naturalism

Aristotle, one of the philosophers whose work lies behind the modern-day scientific method, which is central to methodological naturalism.

Methodological naturalism, the second sense of the term "naturalism" (see above), is "the adoption or assumption of philosophical naturalism … with or without fully accepting or believing it." Robert T. Pennock used the term to clarify that the scientific method confines itself to natural explanations without assuming the existence or non-existence of the supernatural. "We may therefore be agnostic about the ultimate truth of [philosophical] naturalism, but nevertheless adopt it and investigate nature as if nature is all that there is."

Both Schafersman and Strahler assert that it is illogical to try to decouple the two senses of naturalism. "While science as a process only requires methodological naturalism, the practice or adoption of methodological naturalism entails a logical and moral belief in philosophical naturalism, so they are not logically decoupled." This “[philosophical] naturalistic view is espoused by science as its fundamental assumption."

But Eugenie Scott finds it imperative to separate the two, as a practical matter of reducing religious opposition to evolution. "Scientists can defuse some of the opposition to evolution by first recognizing that the vast majority of Americans are believers, and that most Americans want to retain their faith." Scott holds that "individuals can retain religious beliefs and still accept evolution through methodological naturalism. Scientists should therefore avoid mentioning metaphysical naturalism and use methodological naturalism instead." "Even someone who may disagree with my logic … often understands the strategic reasons for separating methodological from philosophical naturalism—if we want more Americans to understand evolution."

Scott's approach has found success, as illustrated in Ecklund's study, where some religious scientists reported that their religious beliefs affect the way they think about the implications – often moral – of their work, but not the way they practice science within methodological naturalism. Papineau notes that "philosophers concerned with religion tend to be less enthusiastic about" metaphysical naturalism, and that those not so disqualified remain content "to set the bar for 'naturalism' higher."

In contrast to Schafersman, Strahler, and Scott, Robert T. Pennock, an expert witness at the Kitzmiller v. Dover Area School District trial who was cited by the judge in his Memorandum Opinion, described "methodological naturalism" as not being based on dogmatic metaphysical naturalism.

Pennock further states that as supernatural agents and powers "are above and beyond the natural world and its agents and powers" and "are not constrained by natural laws", only logical impossibilities constrain what a supernatural agent cannot do. In addition he says: "If we could apply natural knowledge to understand supernatural powers, then, by definition, they would not be supernatural." "Because the supernatural is necessarily a mystery to us, it can provide no grounds on which one can judge scientific models." "Experimentation requires observation and control of the variables.... But by definition we have no control over supernatural entities or forces."

The position that the study of the function of nature is also the study of the origin of nature contrasts with the position of opponents who hold that the functioning of the cosmos is unrelated to how it originated. While these opponents are open to supernatural fiat in the cosmos's invention and coming into existence, they do not appeal to the supernatural when explaining scientifically how the cosmos functions. They agree that allowing "science to appeal to untestable supernatural powers to explain how nature functions would make the scientist's task meaningless, undermine the discipline that allows science to make progress, and would be as profoundly unsatisfying as the ancient Greek playwright's reliance upon the deus ex machina to extract his hero from a difficult predicament."

Views on methodological naturalism

W. V. O. Quine

W. V. O. Quine describes naturalism as the position that there is no higher tribunal for truth than natural science itself. In his view, there is no better method than the scientific method for judging the claims of science, and there is neither any need nor any place for a "first philosophy", such as (abstract) metaphysics or epistemology, that could stand behind and justify science or the scientific method.

Therefore, philosophy should feel free to make use of the findings of scientists in its own pursuit, while also feeling free to offer criticism when those claims are ungrounded, confused, or inconsistent. In Quine's view, philosophy is "continuous with" science, and both are empirical. Naturalism is not a dogmatic belief that the modern view of science is entirely correct. Instead, it simply holds that science is the best way to explore the processes of the universe and that those processes are what modern science is striving to understand.

Karl Popper

Karl Popper equated naturalism with the inductive theory of science. He rejected it on the basis of his general critique of induction (see problem of induction), yet acknowledged its utility as a means for inventing conjectures.

A naturalistic methodology (sometimes called an "inductive theory of science") has its value, no doubt. I reject the naturalistic view: It is uncritical. Its upholders fail to notice that whenever they believe to have discovered a fact, they have only proposed a convention. Hence the convention is liable to turn into a dogma. This criticism of the naturalistic view applies not only to its criterion of meaning, but also to its idea of science, and consequently to its idea of empirical method.

— Karl R. Popper, The Logic of Scientific Discovery, (Routledge, 2002), pp. 52–53, ISBN 0-415-27844-9.

Popper instead proposed that science should adopt a methodology based on falsifiability for demarcation, because no number of experiments can ever prove a theory, but a single experiment can contradict one. Popper holds that scientific theories are characterized by falsifiability.

Alvin Plantinga

Alvin Plantinga, Professor Emeritus of Philosophy at Notre Dame, and a Christian, has become a well-known critic of naturalism. He suggests, in his evolutionary argument against naturalism, that the probability that evolution has produced humans with reliable true beliefs is low or inscrutable, unless the evolution of humans was guided (for example, by God). According to David Kahan of the University of Glasgow, in order to understand how beliefs are warranted, a justification must be found in the context of supernatural theism, as in Plantinga's epistemology. (See also supernormal stimuli).

Plantinga argues that together, naturalism and evolution provide an insurmountable "defeater for the belief that our cognitive faculties are reliable", i.e., a skeptical argument along the lines of Descartes' evil demon or brain in a vat.

Take philosophical naturalism to be the belief that there aren't any supernatural entities – no such person as God, for example, but also no other supernatural entities, and nothing at all like God. My claim was that naturalism and contemporary evolutionary theory are at serious odds with one another – and this despite the fact that the latter is ordinarily thought to be one of the main pillars supporting the edifice of the former. (Of course I am not attacking the theory of evolution, or anything in that neighborhood; I am instead attacking the conjunction of naturalism with the view that human beings have evolved in that way. I see no similar problems with the conjunction of theism and the idea that human beings have evolved in the way contemporary evolutionary science suggests.) More particularly, I argued that the conjunction of naturalism with the belief that we human beings have evolved in conformity with current evolutionary doctrine is in a certain interesting way self-defeating or self-referentially incoherent.

— Alvin Plantinga, Naturalism Defeated?: Essays on Plantinga's Evolutionary Argument Against Naturalism, "Introduction"

The argument is controversial and has been criticized as seriously flawed, for example, by Elliott Sober.

Robert T. Pennock

Robert T. Pennock states that as supernatural agents and powers "are above and beyond the natural world and its agents and powers" and "are not constrained by natural laws", only logical impossibilities constrain what a supernatural agent cannot do. He says: "If we could apply natural knowledge to understand supernatural powers, then, by definition, they would not be supernatural." As the supernatural is necessarily a mystery to us, it can provide no grounds on which one can judge scientific models. "Experimentation requires observation and control of the variables.... But by definition we have no control over supernatural entities or forces." Science does not deal with meanings; the closed system of scientific reasoning cannot be used to define itself. Allowing science to appeal to untestable supernatural powers would make the scientist's task meaningless, undermine the discipline that allows science to make progress, and "would be as profoundly unsatisfying as the ancient Greek playwright's reliance upon the deus ex machina to extract his hero from a difficult predicament."

Naturalism of this sort says nothing about the existence or nonexistence of the supernatural, which by this definition is beyond natural testing. As a practical consideration, the rejection of supernatural explanations would merely be pragmatic, thus it would nonetheless be possible for an ontological supernaturalist to espouse and practice methodological naturalism. For example, scientists may believe in God while practicing methodological naturalism in their scientific work. This position does not preclude knowledge that is somehow connected to the supernatural. Generally however, anything that one can examine and explain scientifically would not be supernatural, simply by definition.

Enzymatic biofuel cell

From Wikipedia, the free encyclopedia

An enzymatic biofuel cell is a specific type of fuel cell that uses enzymes as a catalyst to oxidize its fuel, rather than precious metals. Enzymatic biofuel cells, while currently confined to research facilities, are widely prized for the promise they hold in terms of their relatively inexpensive components and fuels, as well as their potential as a power source for bionic implants.

Operation

A general diagram for an enzymatic biofuel cell using Glucose and Oxygen. The blue area indicates the electrolyte.

Enzymatic biofuel cells work on the same general principles as all fuel cells: use a catalyst to separate electrons from a parent molecule and force them to travel around an electrolyte barrier through a wire to generate an electric current. What makes the enzymatic biofuel cell distinct from more conventional fuel cells are the catalysts they use and the fuels that they accept. Whereas most fuel cells use metals like platinum and nickel as catalysts, the enzymatic biofuel cell uses enzymes derived from living cells (although not within living cells; fuel cells that use whole cells to catalyze fuel are called microbial fuel cells). This offers a couple of advantages for enzymatic biofuel cells: enzymes are relatively easy to mass-produce and so benefit from economies of scale, whereas precious metals must be mined and so have an inelastic supply. Enzymes are also naturally suited to processing organic compounds such as sugars and alcohols, which are extremely common in nature. Most organic compounds cannot be used as fuel by fuel cells with metal catalysts, because the carbon monoxide formed by the interaction of the carbon molecules with oxygen during the fuel cell's operation will quickly “poison” the precious metals that the cell relies on, rendering it useless. Because sugars and other biofuels can be grown and harvested on a massive scale, the fuel for enzymatic biofuel cells is extremely cheap and can be found in nearly any part of the world, making it an extraordinarily attractive option from a logistics standpoint, and even more so for those concerned with the adoption of renewable energy sources.
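As a back-of-envelope illustration (not from the original article), the thermodynamic ceiling for the glucose/oxygen pair in the diagram above can be estimated from standard textbook values: complete oxidation of one glucose molecule transfers 24 electrons and releases roughly 2,870 kJ/mol, which bounds the theoretical cell voltage and the energy stored per gram of fuel:

```python
# Back-of-envelope thermodynamics for a glucose/O2 fuel cell.
# These are idealized textbook figures: real enzymatic cells typically
# extract only a few electrons per glucose and run well below this voltage.
F = 96485            # Faraday constant, C per mol of electrons
delta_g = 2.870e6    # magnitude of ΔG for complete glucose oxidation, J/mol
n_electrons = 24     # electrons transferred per glucose molecule
molar_mass = 180.16  # glucose, g/mol

voltage = delta_g / (n_electrons * F)          # theoretical cell voltage, V
specific_energy = delta_g / 3600 / molar_mass  # Wh per gram of glucose

print(f"Theoretical voltage: {voltage:.2f} V")         # ~1.24 V
print(f"Specific energy: {specific_energy:.2f} Wh/g")  # ~4.4 Wh/g
```

The gap between this ideal and practice is large: an enzyme such as glucose oxidase oxidizes glucose by only two electrons, which is one reason the multi-enzyme "metabolism" cascades discussed below matter.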

Enzymatic biofuel cells also have operating requirements not shared by traditional fuel cells. Most significantly, the enzymes that allow the fuel cell to operate must be "immobilized" near the anode and cathode in order to work properly; if not immobilized, the enzymes diffuse into the cell's fuel, most of the liberated electrons never reach the electrodes, and the cell's effectiveness is compromised. Even with immobilization, a means must be provided for electrons to be transferred to and from the electrodes. This can happen either directly from the enzyme to the electrode ("direct electron transfer") or with the aid of other chemicals that shuttle electrons from the enzyme to the electrode ("mediated electron transfer"). The former technique is possible only with certain types of enzymes whose active sites are close to the enzyme's surface, but it presents fewer toxicity risks for fuel cells intended for use inside the human body. Finally, completely processing the complex fuels used in enzymatic biofuel cells requires a series of different enzymes for each step of the "metabolism" process; producing some of the required enzymes and maintaining them at the required levels can pose problems.

History

Early work with biofuel cells, which began in the early 20th century, was purely of the microbial variety. Research on using enzymes directly for oxidation in biofuel cells began in the early 1960s, with the first enzymatic biofuel cell produced in 1964. The research began as a product of NASA's interest in finding ways to recycle human waste into usable energy aboard spacecraft, and as a component of the quest for an artificial heart, specifically as a power source that could be placed directly into the human body. These two applications – use of animal or vegetable products as fuel, and development of a power source that can be implanted directly into the human body without external refueling – remain the primary goals for developing these biofuel cells. Initial results, however, were disappointing. While the early cells did successfully produce electricity, there was difficulty in transporting the electrons liberated from the glucose fuel to the cell's electrode, and further difficulty in keeping the system stable enough to produce electricity at all, owing to the enzymes' tendency to drift away from where they needed to be for the fuel cell to function. These difficulties led biofuel cell researchers to abandon the enzyme-catalyst model for nearly three decades in favor of the more conventional metal catalysts (principally platinum) used in most fuel cells. Research on the subject did not resume until the 1980s, after it was realized that the metallic-catalyst approach was not going to deliver the qualities desired in a biofuel cell. Since then, work on enzymatic biofuel cells has revolved around resolving the various problems that plagued earlier efforts.

However, many of these problems were resolved in 1998, when it was announced that researchers had managed to completely oxidize methanol using a series (or "cascade") of enzymes in a biofuel cell. Before then, enzyme catalysts had failed to completely oxidize the cell's fuel, delivering far less energy than expected given what was known about the fuel's energy capacity. While methanol is now far less relevant in this field as a fuel, the demonstrated method of using a series of enzymes to completely oxidize the cell's fuel gave researchers a way forward, and much work is now devoted to using similar methods to achieve complete oxidation of more complicated compounds, such as glucose. Perhaps more importantly, 1998 was also the year in which enzyme "immobilization" was successfully demonstrated, increasing the usable life of the methanol fuel cell from just eight hours to over a week. Immobilization also allowed researchers to put earlier discoveries into practice, in particular the discovery of enzymes that can transfer electrons directly to the electrode. This process had been understood since the 1980s but depended heavily on placing the enzyme as close to the electrode as possible, which made it unusable until immobilization techniques were devised. In addition, developers of enzymatic biofuel cells have applied advances in nanotechnology to their designs, including the use of carbon nanotubes to immobilize enzymes directly. Other research has gone into exploiting some of the strengths of the enzymatic design to dramatically miniaturize the fuel cells, a prerequisite for their use in implantable devices.
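As a rough illustration of why complete oxidation mattered, the charge a fuel can deliver scales with the number of electrons captured per molecule: a single-enzyme partial oxidation of glucose (to gluconolactone) captures only 2 of the 24 electrons available from full oxidation to CO2. A minimal sketch of the arithmetic (the electron counts are standard electrochemistry; the 2-electron partial-oxidation case is an assumed single-enzyme step, not a figure from the text):

```python
# Back-of-envelope charge yield per gram of fuel, illustrating why the
# 1998 enzyme-cascade result (complete oxidation) mattered.  Electron
# counts are standard electrochemistry; the "partial" glucose case
# assumes a single-enzyme oxidation to gluconolactone (2 e- per molecule).
F = 96485.0  # Faraday constant, C per mol of electrons

def charge_per_gram(electrons_per_molecule, molar_mass_g):
    """Charge (in coulombs) liberated per gram of fuel."""
    return electrons_per_molecule * F / molar_mass_g

glucose_partial   = charge_per_gram(2, 180.16)   # ~1.07e3 C/g
glucose_complete  = charge_per_gram(24, 180.16)  # ~1.29e4 C/g
methanol_complete = charge_per_gram(6, 32.04)    # ~1.81e4 C/g

print(f"glucose, partial oxidation:  {glucose_partial:8.0f} C/g")
print(f"glucose, complete oxidation: {glucose_complete:8.0f} C/g")
print(f"gain from full oxidation:    {glucose_complete / glucose_partial:.0f}x")
```

The factor of 12 between the partial and complete glucose cases is the energy left on the table by early single-enzyme designs, and why cascade approaches dominate current research.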
One research team took advantage of the enzymes' extreme selectivity to remove entirely the barrier between anode and cathode, a barrier that is an absolute requirement in non-enzymatic fuel cells. This allowed the team to produce a cell delivering 1.1 microwatts at over half a volt from a volume of just 0.01 cubic millimeters.
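Taking those reported figures at face value, the cell's volumetric power density and operating current work out as follows (the 0.55 V operating point is an assumed round number standing in for "over half a volt"):

```python
# Volumetric power density of the miniature enzymatic cell described
# above: 1.1 microwatts from 0.01 cubic millimeters at "over half a
# volt" (0.55 V is an assumed round figure, not from the source).
power_W    = 1.1e-6    # 1.1 uW
volume_mm3 = 0.01
voltage_V  = 0.55      # assumption

power_density_uW_per_mm3 = power_W * 1e6 / volume_mm3   # 110 uW/mm^3
power_density_mW_per_cm3 = power_density_uW_per_mm3     # 1 uW/mm^3 == 1 mW/cm^3
current_uA = power_W / voltage_V * 1e6                  # ~2 uA

print(f"power density: {power_density_mW_per_cm3:.0f} mW/cm^3")
print(f"operating current: {current_uA:.1f} uA")
```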

While enzymatic biofuel cells are not currently in use outside the laboratory, non-academic organizations have shown increasing interest in practical applications for the devices as the technology has advanced over the past decade. In 2007, Sony announced that it had developed an enzymatic biofuel cell that can be linked in sequence and used to power an MP3 player, and in 2010 an engineer employed by the US Army announced that the Defense Department planned to conduct field trials of its own "bio-batteries" the following year. In explaining their pursuit of the technology, both organizations emphasized the extraordinary abundance (and extraordinarily low cost) of fuel for these cells, a key advantage that is likely to become even more attractive if the price of portable energy sources rises, or if the cells can be successfully integrated into electronic human implants.

Feasibility of enzymes as catalysts

With respect to fuel cells, enzymes offer several advantages. An important enzymatic property to consider is the driving force, or potential, necessary for successful catalysis. Many enzymes operate at potentials close to those of their substrates, which is well suited to fuel cell applications.

Furthermore, the protein matrix surrounding the active site provides many vital functions: selectivity for the substrate, internal electron coupling, acidic/basic properties, and the ability to bind to other proteins (or the electrode). Enzymes are stable in the absence of proteases, and heat-resistant enzymes can be extracted from thermophilic organisms, offering a wider range of operational temperatures. Operating conditions are generally 20–50 °C and pH 4.0–8.0.

A drawback of enzymes is their size: because enzymes are large, they yield a low current density per unit electrode area, since only a limited number fit in the available space. Since it is not possible to reduce enzyme size, it has been argued that these types of cells will be lower in activity. One solution is to use three-dimensional electrodes, or immobilization on conducting carbon supports, which provide a high surface area. These electrodes extend into three-dimensional space, greatly increasing the surface area for enzymes to bind and thus increasing the current.

Hydrogenase-based biofuel cells

As per the definition of enzymatic biofuel cells, enzymes serve as electrocatalysts at both the cathode and the anode. In hydrogenase-based biofuel cells, hydrogenases at the anode oxidize H2, splitting molecular hydrogen into electrons and protons. In H2/O2 biofuel cells, the cathode is coated with oxidase enzymes that reduce O2 to water, consuming the protons and electrons.

Hydrogenase as an energy source

In recent years, research on hydrogenases has grown significantly due to scientific and technological interest in hydrogen. The bidirectional, or reversible, reaction catalyzed by hydrogenase offers a solution to the challenge of capturing and storing renewable energy as a fuel for use on demand: electricity obtained from a renewable source (e.g. solar, wind, hydrothermal) can be stored chemically as H2 during periods of low energy demand. When energy is needed, the H2 can be oxidized to produce electricity with high efficiency.

The use of hydrogen in energy converting devices has gained interest due to being a clean energy carrier and potential transportation fuel.
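For reference, the ceiling on the voltage any H2/O2 cell (enzymatic or otherwise) can deliver follows from the thermodynamics of water formation. A short check, using the textbook standard Gibbs energy of formation of liquid water (a well-known value, not a figure from this text):

```python
# Maximum (thermodynamic) cell voltage for the H2/O2 reaction that a
# hydrogenase-based fuel cell exploits.  The standard Gibbs energy of
# formation of liquid water (-237.1 kJ/mol) is a textbook value.
F  = 96485.0     # Faraday constant, C/mol e-
dG = -237.1e3    # J/mol for H2 + 1/2 O2 -> H2O(l)
n  = 2           # electrons transferred per H2

E_max = -dG / (n * F)
print(f"theoretical cell voltage: {E_max:.2f} V")   # ~1.23 V
```

Real cells operate well below this limit; the 950 mV reported later in this article for an enzymatic H2/O2 cell is unusually close to it.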

Feasibility of hydrogenase as catalysts

In addition to the advantages of enzymes mentioned previously, hydrogenase is a very efficient catalyst for H2 oxidation, producing electrons and protons. Platinum is the typical catalyst for this reaction; however, hydrogenase activity is comparable, without the issue of catalyst poisoning by H2S and CO. In H2/O2 fuel cells there is no production of greenhouse gases, as the only product is water.

With regard to structural advantages, hydrogenase is highly selective for its substrate. The lack of need for a membrane allows the biofuel cell design to be small and compact, provided an oxygen-tolerant hydrogenase is used (O2 is otherwise an inhibitor) and the cathode enzyme (typically laccase) does not react with the fuel. The electrodes are preferably made from carbon, which is abundant and renewable, can be modified in many ways, and adsorbs enzymes with high affinity. Attaching the hydrogenase to a surface also extends the lifetime of the enzyme.

Challenges

There are several difficulties to consider associated with the incorporation of hydrogenase in biofuel cells. These factors must be taken into account to produce an efficient fuel cell.

Enzyme immobilization

Since the hydrogenase-based biofuel cell hosts a redox reaction, hydrogenase must be immobilized on the electrode in such a way that it can exchange electrons directly with the electrode to facilitate the transfer of electrons. This proves to be a challenge in that the active site of hydrogenase is buried in the center of the enzyme where the FeS clusters are used as an electron relay to exchange electrons with its natural redox partner.

Possible solutions for greater efficiency of electron delivery include the immobilization of hydrogenase with the most exposed FeS cluster close enough to the electrode or the use of a redox mediator to carry out the electron transfer. Direct electron transfer is also possible through the adsorption of the enzyme on graphite electrodes or covalent attachment to the electrode. Another solution includes the entrapment of hydrogenase in a conductive polymer.

Enzyme size

An immediate comparison of hydrogenase with standard inorganic molecular catalysts reveals that hydrogenase is very bulky: approximately 5 nm in diameter, compared to 1–5 nm for Pt catalysts. This limits the possible electrode coverage and caps the maximum current density.
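A back-of-envelope estimate shows how the enzyme's footprint caps current density. Assuming a close-packed monolayer of 5 nm enzymes (the diameter from the text) and an assumed turnover of 1,000 H2 molecules per second per enzyme (an order-of-magnitude figure, not from the source):

```python
import math

# Rough ceiling on current density from a monolayer of hydrogenase
# (~5 nm diameter, per the text).  The turnover frequency (1,000 H2/s
# per enzyme) is an assumed order-of-magnitude figure, not from the source.
e_charge = 1.602e-19     # elementary charge, C
d_nm = 5.0               # enzyme diameter, from the text

footprint_nm2 = math.pi * (d_nm / 2) ** 2      # ~19.6 nm^2 per enzyme
coverage_per_cm2 = 1e14 / footprint_nm2        # 1 cm^2 = 1e14 nm^2
turnover_per_s = 1000.0                        # assumption
electrons_per_H2 = 2

j_A_per_cm2 = coverage_per_cm2 * turnover_per_s * electrons_per_H2 * e_charge
print(f"monolayer current-density limit: ~{j_A_per_cm2 * 1e3:.1f} mA/cm^2")
```

Even under these generous assumptions a flat monolayer tops out around a couple of mA/cm², which is why the porous, high-surface-area electrodes described below are attractive.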

Since altering the size of hydrogenase is not possible, a porous electrode can be used instead of a planar one to increase the density of enzyme on the electrode and maintain fuel cell activity. This increases the electroactive area, allowing more enzyme to be loaded onto the electrode. An alternative is to form films of graphite particles, with hydrogenase adsorbed onto them, inside a polymer matrix; the graphite particles then collect and transport electrons to the electrode surface.

Oxidative damage

In a biofuel cell, hydrogenase is exposed to two oxidizing threats. First, O2 inactivates most hydrogenases (with the exception of some [NiFe] enzymes) by diffusing to the active site and destructively modifying it. Because O2 is the fuel at the cathode, the compartments must be physically separated, or the hydrogenase enzymes at the anode will be inactivated. Second, the cathode enzyme imposes a positive potential on the hydrogenase at the anode. This further enhances inactivation by O2, affecting even [NiFe] hydrogenases that were previously O2-tolerant.

To avoid inactivation by O2, a proton exchange membrane can be used to separate the anode and cathode compartments such that O2 is unable to diffuse to and destructively modify the active site of hydrogenase.

Applications

Entrapment of hydrogenase in polymers

There are many ways to adsorb hydrogenases onto carbon electrodes that have been modified with polymers. An example is a study by Morozov et al., who inserted [NiFe] hydrogenase into polypyrrole films and, to provide proper contact with the electrode, entrapped redox mediators in the films. This was successful because the hydrogenase density in the films was high and the redox mediator helped to connect all enzyme molecules for catalysis, yielding about the same power output as hydrogenase in solution.

Immobilizing hydrogenase on carbon nanotubes

Carbon nanotubes can also be used as a support for hydrogenase on the electrode, due to their ability to assemble into large, porous, conductive networks. Such hybrids have been prepared using [FeFe] and [NiFe] hydrogenases. The [NiFe] hydrogenase isolated from A. aeolicus (a thermophilic bacterium) was able to oxidize H2 by direct electron transfer, without a redox mediator, with a 10-fold higher catalytic current on stationary CNT-coated electrodes than on bare electrodes.

Another way of coupling hydrogenase to the nanotubes is to bind the enzymes covalently, avoiding a time delay. Hydrogenase isolated from D. gigas (the sulfate-reducing bacterium Desulfovibrio gigas) was coupled to multiwalled carbon nanotube (MWCNT) networks and produced a current ~30 times higher than a graphite-hydrogenase anode. A slight drawback of this method is that the hydrogenase ends up covering only the scarce defective spots in the nanotube network. Some adsorption procedures also tend to damage the enzymes, whereas covalent coupling stabilizes the enzyme and allows it to remain active for longer: hydrogenase-MWCNT electrodes retained catalytic activity for over a month, whereas hydrogenase-graphite electrodes lasted only about a week.

Hydrogenase-based biofuel cell applications

A fully enzymatic hydrogen fuel cell was constructed by the Armstrong group, who used the cell to power a watch. The fuel cell consisted of a graphite anode bearing hydrogenase isolated from R. metallidurans and a graphite cathode modified with fungal laccase. The electrodes were placed in a single chamber containing a mixture of 3% H2 gas in air; no membrane was needed, owing to the hydrogenase's tolerance of oxygen. The fuel cell produced a voltage of 950 mV and a power density of 5.2 µW/cm². Although functional, the system was not at optimum output, due to the low accessible H2 levels, the lower catalytic activity of oxygen-tolerant hydrogenases, and the lower density of catalysts on the flat electrodes.
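From the reported 950 mV and 5.2 µW/cm², the implied current density, and the electrode area needed for a watch-scale load, can be estimated (the ~1 µW watch drain is an assumed typical figure for a quartz watch, not from the source):

```python
# Implied current density of the Armstrong group's watch-powering cell:
# 5.2 uW/cm^2 at 950 mV (both figures from the text).  The ~1 uW watch
# drain used for the area estimate is an assumed typical figure.
power_density_uW_cm2 = 5.2
voltage_V = 0.95

j_uA_cm2 = power_density_uW_cm2 / voltage_V        # ~5.5 uA/cm^2
watch_drain_uW = 1.0                               # assumption
area_cm2 = watch_drain_uW / power_density_uW_cm2   # ~0.19 cm^2

print(f"current density: {j_uA_cm2:.1f} uA/cm^2")
print(f"electrode area to run a ~1 uW watch: {area_cm2:.2f} cm^2")
```

A fraction of a square centimeter of electrode suffices under these assumptions, which is consistent with the group's choice of a watch as the demonstration load.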

This system was then later improved by adding a MWCNT network to increase the electrode area.

Applications

Self-powered biosensors

The concept of applying enzymatic biofuel cells to self-powered biosensing was first introduced in 2001. With continued effort, several types of self-powered enzyme-based biosensors have been demonstrated. In 2016, the first stretchable, textile-based biofuel cells, acting as wearable self-powered sensors, were described. The smart textile device utilized a lactate oxidase-based biofuel cell, allowing real-time monitoring of lactate in sweat for on-body applications.

Cancer prevention

From Wikipedia, the free encyclopedia

Cancer prevention is the practice of taking active measures to decrease the incidence of cancer and cancer mortality. The practice of prevention depends both on individual efforts to improve lifestyle and seek preventive screening, and on socioeconomic or public policy related to cancer prevention. Globalized cancer prevention is regarded as a critical objective due to its applicability to large populations, its reduction of the long-term effects of cancer through proactive health practices and behaviors, and its perceived cost-effectiveness and viability for all socioeconomic classes.

The majority of cancer cases are due to environmental risk factors, and many of these environmental factors are controllable lifestyle choices. By some reports, more than 75% of cancer deaths could be prevented by avoiding risk factors including tobacco, overweight/obesity, an insufficient diet, physical inactivity, alcohol, sexually transmitted infections, and air pollution. Not all environmental causes are controllable, such as naturally occurring background radiation, and other cases of cancer are caused by hereditary genetic disorders. Genetic engineering techniques currently under development may serve as preventive measures in the future. Future preventive screening measures can be additionally improved by minimizing invasiveness and increasing specificity by taking individual biological makeup into account, an approach known as "population-based personalized cancer screening."

Death rate adjusted for age for malignant cancer per 100,000 inhabitants in 2004.

While anyone can get cancer, age is one of the biggest factors that increases the risk of cancer: 3 out of 4 cancers are found in people aged 55 or older.

Risk reduction

Dietary

Advertisement for a healthy diet to possibly reduce cancer risk

An estimated 35% of human cancer mortality is attributed to the diet of the individual. Studies have linked excessive consumption of red or processed meat to an increased risk of breast cancer, colon cancer, and pancreatic cancer, a phenomenon that could be due to the presence of carcinogens in meats cooked at high temperatures. More specifically, a higher risk of breast cancer appears to be associated with a higher intake of meat, including both red and processed meats.

Dietary recommendations for reducing cancer risk typically include an emphasis on vegetables, fruit, whole grains, and fish, and an avoidance of processed and red meat (beef, pork, lamb), animal fats, and refined carbohydrates. The World Cancer Research Fund recommends a diet rich in fruits and vegetables to reduce the risk of cancer. A diet rich in foods of plant origin, including non-starchy fruits and vegetables, non-starchy roots and tubers, and whole grains, may have protective effects against cancer. Consumption of coffee is associated with a reduced risk of liver cancer and endometrial cancer. Substituting processed foods such as biscuits, cakes, or white bread – which are high in fat, sugars, and refined starches – with a plant-based diet may reduce the risk of cancer. In some cases, plant-based diets have been shown to be inversely associated with overall cancer risk.

While many dietary recommendations have been proposed to reduce the risk of cancer, the evidence to support them is not definitive. The primary dietary factors that increase risk are obesity and alcohol consumption; with a diet low in fruits and vegetables and high in red meat being implicated but not confirmed. A 2014 meta-analysis did not find a relationship between consuming fruits and vegetables and reduced cancer risk.

Physical activity

Research shows that regular physical activity may help to reduce cancer risk by up to 30%, with up to 300 minutes per week of moderate- to vigorous-intensity physical activity recommended.

Possible mechanisms by which physical activity may reduce cancer risk include lowering levels of estrogen and insulin, reducing inflammation, and strengthening the immune system.

Medication and supplements

In the general population, NSAIDs reduce the risk of colorectal cancer; however, due to their cardiovascular and gastrointestinal side effects, they cause overall harm when used to lower cancer risk. Aspirin has been found to reduce the risk of death from cancer by about 7%. COX-2 inhibitors may decrease the rate of polyp formation in people with familial adenomatous polyposis, but they are associated with the same adverse effects as NSAIDs. Daily use of tamoxifen or raloxifene has been demonstrated to reduce the risk of developing breast cancer in high-risk women. The benefit versus harm of 5-alpha-reductase inhibitors such as finasteride is not clear.

Vitamins have not been found to be effective at reducing cancer risk, although low blood levels of vitamin D are correlated with increased cancer risk. Whether this relationship is causal and vitamin D supplementation is protective is not determined. Beta-carotene supplementation has been found to increase lung cancer rates in those who are at high risk. Folic acid supplementation has not been found effective in preventing colon cancer and may increase colon polyps. A 2018 systematic review concluded that selenium has no beneficial effect in reducing the risk of cancer based on high quality evidence.

Avoidance of carcinogens

The United States National Toxicology Program (NTP) has identified the chemical substances listed below as known human carcinogens in the NTP's 15th Report on Carcinogens. Simply because a substance has been designated as a carcinogen, however, does not mean that the substance will necessarily cause cancer. Many factors influence whether a person exposed to a carcinogen will develop cancer, including the amount and duration of the exposure and the individual's genetic background.

Ingestion

Inhalation

Skin exposure

Recent Updates in Carcinogen Classification

Updated evaluations by the International Agency for Research on Cancer (IARC) continue to confirm the carcinogenicity of long-recognized agents such as asbestos and benzene, which are included above in the NTP 15th report on carcinogens, while also guiding the assessment of emerging substances in consumer products. A meta-analysis published in 2023 found that exposure to certain endocrine-disrupting chemicals, including p,p′-DDT (and its metabolite p,p′-DDE) and several polychlorinated biphenyl (PCB) variants, was associated with increased risk of breast cancer.

Vaccination

Anti-cancer vaccines can be preventive or used as therapeutic treatment. All such vaccines incite adaptive immunity by enhancing cytotoxic T lymphocyte (CTL) recognition of, and activity against, tumor-associated or tumor-specific antigens (TAAs and TSAs).

Vaccines have been developed that prevent infection by some carcinogenic viruses. Human papillomavirus vaccine (Gardasil and Cervarix) decreases the risk of developing cervical cancer. The hepatitis B vaccine prevents infection with hepatitis B virus and thus decreases the risk of liver cancer. The administration of human papillomavirus and hepatitis B vaccinations is recommended when resources allow.

Some cancer vaccines are immunoglobulin-based and target antigens specific to cancer cells or abnormal human cells. These vaccines may be given during the progression of disease to boost the immune system's ability to recognize and attack cancer antigens as foreign entities. Antibodies for cancer cell vaccines may be taken from the patient's own body (autologous vaccine) or from another patient (allogeneic vaccine). Several autologous vaccines, such as Oncophage for kidney cancer and Vitespen for a variety of cancers, have either been released or are undergoing clinical trials. FDA-approved treatments such as Sipuleucel-T for metastatic prostate cancer or Nivolumab for melanoma and lung cancer act either by targeting over-expressed or mutated proteins or by temporarily inhibiting immune checkpoints to boost immune activity.

Screening

Screening procedures, commonly sought for more prevalent cancers such as colon, breast, and cervical cancer, have greatly improved in the past few decades thanks to advances in biomarker identification and detection. Early detection of pancreatic cancer biomarkers has been accomplished using a surface-enhanced Raman spectroscopy (SERS)-based immunoassay approach. A SERS-based multiplex protein biomarker detection platform in a microfluidic chip can detect several protein biomarkers at once, helping to predict the type of disease and to distinguish between diseases with similar biomarkers (e.g. pancreatic cancer, ovarian cancer, and pancreatitis).

To improve the chances of detecting cancer early, all eligible people should take advantage of cancer screening services. However, uptake of cancer screening among the general population is not widespread, especially among disadvantaged groups (e.g. those with low income or mental illnesses, or those from ethnic minority groups), who face various barriers that lead to lower attendance rates.

Cervical cancer

Cervical cancer is usually screened through in vitro examination of the cells of the cervix (e.g. Pap smear), colposcopy or direct inspection of the cervix (after application of dilute acetic acid), or testing for HPV, the oncogenic virus that is the necessary cause of cervical cancer. Screening is recommended for women over 21 years of age: women between 21 and 29 are encouraged to receive a Pap smear every three years, and those over 29 every five years. Women older than 65, with no history of cervical cancer or abnormality and an appropriate history of negative Pap test results, may cease regular screening.

Still, adherence to recommended screening plans depends on age and may be linked to "educational level, culture, psychosocial issues, and marital status," further emphasizing the importance of addressing these challenges in regards to cancer screening.

Colorectal cancer

Colorectal cancer is most often screened with the fecal occult blood test (FOBT). Variants of this test include guaiac-based FOBT (gFOBT), the fecal immunochemical test (FIT), and stool DNA testing (sDNA). Further testing includes flexible sigmoidoscopy (FS), total colonoscopy (TC), or computed tomography (CT) scans if a total colonoscopy is not suitable. The recommended age range for screening is 50–75 years, though this depends heavily on medical history and exposure to risk factors for colorectal cancer. Effective screening has been shown to reduce the incidence of colorectal cancer by 33% and colorectal cancer mortality by 43%.

Breast cancer

The estimated number of new cancer cases in the US in 2018 was more than 1.7 million, with more than six hundred thousand deaths. Factors such as breast size, reduced physical activity, obesity and overweight status, infertility and never having had children, hormone replacement therapy (HRT), and genetics are risk factors for breast cancer. Mammograms are widely used to screen for breast cancer and are recommended for women 50–74 years of age by the US Preventive Services Task Force (USPSTF). However, the USPSTF does not recommend mammograms for women 40–49 years old, due to the possibility of overdiagnosis.

Preventable causes of cancer

As of 2017, tobacco use, diet and nutrition, physical activity, obesity/overweight status, infectious agents, and chemical and physical carcinogens have been reported to be the leading areas where cancer prevention can be practiced through enacting positive lifestyle changes, getting appropriate regular screening, and getting vaccinated.

The development of many common cancers is promoted by such risk factors. For example, consumption of tobacco and alcohol, a medical history of genital warts and STDs, immunosuppression, unprotected sex, and early age of first sexual intercourse and pregnancy may all serve as risk factors for cervical cancer. Obesity, red or processed meat consumption, tobacco and alcohol use, and a medical history of inflammatory bowel disease are all risk factors for colorectal cancer (CRC). On the other hand, exercise and consumption of vegetables may help decrease the risk of CRC.

Several preventable causes of cancer were highlighted in Doll and Peto's landmark 1981 study, which estimated that 75–80% of cancers in the United States could be prevented by avoidance of 11 different factors. A 2013 review of more recent cancer prevention literature by Schottenfeld et al., summarizing studies reported between 2000 and 2010, points to most of the same avoidable factors identified by Doll and Peto. However, Schottenfeld et al. considered fewer factors in their review than Doll and Peto (e.g. they did not include diet) and indicated that avoidance of these fewer factors would prevent 60% of cancer deaths. The table below indicates the proportions of cancer deaths attributed to different factors, summarizing the observations of Doll and Peto, Schottenfeld et al., and several other authors, and shows the influence of major lifestyle factors on the prevention of cancer, such as tobacco, an unhealthy diet, obesity, and infections.

Proportions of cancer deaths in the United States attributed to different factors

Factor                           | Doll & Peto | Schottenfeld et al. | Other reports
Tobacco                          | 30%         | 30%                 | 38% men, 23% women; 30%; 25-30%
Unhealthy diet                   | 35%         | -                   | 32%; 10%; 30-35%
Obesity                          | *           | 10%                 | 14% women, 20% men (among non-smokers); 10-20%; 19-20% United States; 16-18% Great Britain; 13% Brazil; 11-12% China
Infection†                       | 10%         | 5-8%                | 7-10%; 8% developed nations, 26% developing nations; 10% high income, 25% African
Alcohol                          | 3%          | 3-4%                | 3.6%; 8% USA; 20% France
Occupational exposures           | 4%          | 3-5%                | 2-10%; may be 15-20% in men
Radiation (solar and ionizing)   | 3%          | 3-4%                | up to 10%
Physical inactivity              | *           | <5%                 | 7%
Reproductive and sexual behavior | 1-13%       | -                   | -
Pollution                        | 2%          | -                   | -
Medicines and medical procedures | 1%          | -                   | -
Industrial products              | <1%         | -                   | -
Food additives                   | <1%         | -                   | -

*Included in diet

†Carcinogenic infections include: for the uterine cervix (human papillomavirus [HPV]), liver (hepatitis B virus [HBV] and hepatitis C virus [HCV]), stomach (Helicobacter pylori [H pylori]), lymphoid tissues (Epstein-Barr virus [EBV]), nasopharynx (EBV), urinary bladder (Schistosoma hematobium), and biliary tract (Opisthorchis viverrini, Clonorchis sinensis)

History of cancer prevention

Cancer has been thought to be a preventable disease since the time of the Roman physician Galen, who observed that an unhealthy diet was correlated with cancer incidence. In 1713, the Italian physician Ramazzini hypothesized that abstinence explained the lower rates of cervical cancer in nuns. Further observation in the 18th century led to the discovery that certain chemicals, such as tobacco, soot, and tar (the last causing scrotal cancer in chimney sweeps, as reported by Percivall Pott in 1775), could serve as carcinogens for humans. Although Pott suggested preventive measures for chimney sweeps (wearing clothes to prevent bodily contact with soot), his suggestions were put into practice only in Holland, resulting in decreasing rates of scrotal cancer among chimney sweeps there. The 19th century then saw the beginning of the classification of chemical carcinogens.

In the early 20th century, physical and biological carcinogens, such as X-ray radiation and the Rous sarcoma virus (discovered in 1911), were identified. Despite the observed correlation of environmental and chemical factors with cancer development, formal prevention research was lacking, and lifestyle changes for cancer prevention were not considered feasible during this time.

In Europe, in 1987 the European Commission launched the European Code Against Cancer to help educate the public about actions they can take to reduce their risk of getting cancer. The first version of the code covered 10 recommendations covering tobacco, alcohol, diet, weight, sun exposure, exposure to known carcinogens, early detection and participation in organized breast and cervical cancer screening programs. In the early 1990s, the European School of Oncology led a review of the code and added details about the scientific evidence behind each of the recommendations. Later updates were coordinated by the International Agency for Research on Cancer. The fourth edition of the code, developed in 2012‒2013, also includes recommendations on participation in vaccination programs for hepatitis B (infants) and human papillomavirus (girls), breast feeding and hormone replacement therapy, and participation in organized colorectal cancer screening programs.
