
Saturday, July 7, 2018

Cosmological argument

From Wikipedia, the free encyclopedia

In natural theology and philosophy, a cosmological argument is an argument in which the existence of a unique being, generally seen as some kind of god, is deduced or inferred from facts or alleged facts concerning causation, change, motion, contingency, or finitude in respect of the universe as a whole or processes within it. It is traditionally known as an argument from universal causation, an argument from first cause, or the causal argument, and is more precisely a cosmogonical argument (about the origin). Whichever term is employed, there are three basic variants of the argument, each with subtle yet important distinctions: the arguments from in causa (causality), in esse (essentiality), and in fieri (becoming).

The basic premises of all of these are the concept of causality and the Universe having a beginning. The conclusion of these arguments is a first cause, subsequently deemed to be God. The argument goes back to Aristotle or earlier; it was developed in Neoplatonism and early Christianity, later in medieval Islamic theology during the 9th to 12th centuries, and was re-introduced to medieval Christian theology in the 13th century by Thomas Aquinas. The cosmological argument is closely related to the principle of sufficient reason as addressed by Gottfried Leibniz and Samuel Clarke, itself a modern exposition of the claim that "nothing comes from nothing" attributed to Parmenides.

Contemporary defenders of cosmological arguments include William Lane Craig,[3] Robert Koons,[4] Alexander Pruss,[5] and William L. Rowe.[6]

History

Plato and Aristotle, depicted here in Raphael's The School of Athens, both developed first cause arguments.

Plato (c. 427–347 BC) and Aristotle (c. 384–322 BC) both posited first cause arguments, though each had certain notable caveats.[7] In The Laws (Book X), Plato posited that all movement in the world and the Cosmos was "imparted motion". This required a "self-originated motion" to set it in motion and to maintain it. In Timaeus, Plato posited a "demiurge" of supreme wisdom and intelligence as the creator of the Cosmos.

Aristotle argued against the idea of a first cause, often confused with the idea of a "prime mover" or "unmoved mover" (πρῶτον κινοῦν ἀκίνητον or primus motor), in his Physics and Metaphysics.[8] Aristotle argued in favor of the idea of several unmoved movers, one powering each celestial sphere, which he believed lived beyond the sphere of the fixed stars, and explained why motion in the universe (which he believed was eternal) had continued for an infinite period of time. Aristotle argued that the atomists' assertion of a non-eternal universe would require an uncaused cause — in his terminology, an efficient first cause — an idea he considered a nonsensical flaw in the reasoning of the atomists.

Like Plato, Aristotle believed in an eternal cosmos with no beginning and no end (which in turn follows Parmenides' famous statement that "nothing comes from nothing"). In what he called "first philosophy" or metaphysics, Aristotle did intend a theological correspondence between the prime mover and deity (presumably Zeus); functionally, however, he provided an explanation for the apparent motion of the "fixed stars" (now understood as the daily rotation of the Earth). According to his theses, immaterial unmoved movers are eternal unchangeable beings that constantly think about thinking, but, being immaterial, they are incapable of interacting with the cosmos and have no knowledge of what transpires therein. From an "aspiration or desire",[9] the celestial spheres imitate that purely intellectual activity as best they can, by uniform circular motion. The unmoved movers inspiring the planetary spheres are no different in kind from the prime mover; they merely suffer a dependency of relation to the prime mover. Correspondingly, the motions of the planets are subordinate to the motion inspired by the prime mover in the sphere of fixed stars. Aristotle's natural theology admitted no creation or capriciousness from the immortal pantheon, but maintained a defense against dangerous charges of impiety.

Plotinus, a third-century Platonist, taught that the One transcendent absolute caused the universe to exist simply as a consequence of its existence (creatio ex deo). His disciple Proclus stated "The One is God".[citation needed]

Centuries later, the Islamic philosopher Avicenna (c. 980–1037) inquired into the question of being, in which he distinguished between essence (Mahiat) and existence (Wujud). He argued that the fact of existence could not be inferred from or accounted for by the essence of existing things, and that form and matter by themselves could not originate and interact with the movement of the Universe or the progressive actualization of existing things. Thus, he reasoned that existence must be due to an agent cause that necessitates, imparts, gives, or adds existence to an essence. To do so, the cause must coexist with its effect and be an existing thing.[10]

Steven Duncan writes that it "was first formulated by a Greek-speaking Syriac Christian neo-Platonist, John Philoponus, who claims to find a contradiction between the Greek pagan insistence on the eternity of the world and the Aristotelian rejection of the existence of any actual infinite." Referring to the argument as the "'Kalam' cosmological argument", Duncan asserts that it "received its fullest articulation at the hands of [medieval] Muslim and Jewish exponents of Kalam ("the use of reason by believers to justify the basic metaphysical presuppositions of the faith")."[11]

Thomas Aquinas (c. 1225–1274) adapted and enhanced the argument he found in his reading of Aristotle and Avicenna to form one of the most influential versions of the cosmological argument. His conception of First Cause was the idea that the Universe must be caused by something that is itself uncaused, which he claimed is that which we call God: "The second way is from the nature of the efficient cause. In the world of sense we find there is an order of efficient causes. There is no case known (neither is it, indeed, possible) in which a thing is found to be the efficient cause of itself; for so it would be prior to itself, which is impossible. Now in efficient causes it is not possible to go on to infinity, because in all efficient causes following in order, the first is the cause of the intermediate cause, and the intermediate is the cause of the ultimate cause, whether the intermediate cause be several, or only one. Now to take away the cause is to take away the effect. Therefore, if there be no first cause among efficient causes, there will be no ultimate, nor any intermediate cause. But if in efficient causes it is possible to go on to infinity, there will be no first efficient cause, neither will there be an ultimate effect, nor any intermediate efficient causes; all of which is plainly false. Therefore it is necessary to admit a first efficient cause, to which everyone gives the name of God."[14] Importantly, Aquinas' Five Ways, given in the second question of his Summa Theologica, are not the entirety of Aquinas' demonstration that the Christian God exists. The Five Ways form only the beginning of Aquinas' Treatise on the Divine Nature.

Versions of the argument

Argument from contingency

In the scholastic era, Aquinas formulated the "argument from contingency", following Aristotle in claiming that there must be something to explain why the Universe exists. Since the Universe could, under different circumstances, conceivably not exist (contingency), its existence must have a cause – not merely another contingent thing, but something that exists by necessity (something that must exist in order for anything else to exist).[15] In other words, even if the Universe has always existed, it still owes its existence to an Uncaused Cause.[16] Aquinas further said: "...and this we understand to be God."[17]

Aquinas's argument from contingency allows for the possibility of a Universe that has no beginning in time. It is a form of argument from universal causation. Aquinas observed that, in nature, there were things with contingent existences. Since it is possible for such things not to exist, there must be some time at which these things did not in fact exist. Thus, according to Aquinas, there must have been a time when nothing existed. If this is so, there would exist nothing that could bring anything into existence. Contingent beings, therefore, are insufficient to account for the existence of contingent beings: there must exist a necessary being whose non-existence is an impossibility, and from which the existence of all contingent beings is derived.

The German philosopher Gottfried Leibniz made a similar argument with his principle of sufficient reason in 1714. "There can be found no fact that is true or existent, or any true proposition," he wrote, "without there being a sufficient reason for its being so and not otherwise, although we cannot know these reasons in most cases." He formulated the cosmological argument succinctly: "Why is there something rather than nothing? The sufficient reason [...] is found in a substance which [...] is a necessary being bearing the reason for its existence within itself."[18]

In esse and in fieri

The difference between the arguments from causation in fieri and in esse is a fairly important one. In fieri is generally translated as "becoming", while in esse is generally translated as "in essence". In fieri, the process of becoming, is similar to building a house. Once it is built, the builder walks away, and it stands of its own accord; compare the watchmaker analogy. (It may require occasional maintenance, but that is beyond the scope of the first cause argument.)

In esse (essence) is more akin to the light from a candle or the liquid in a vessel. George Hayward Joyce, SJ, explained that "...where the light of the candle is dependent on the candle's continued existence, not only does a candle produce light in a room in the first instance, but its continued presence is necessary if the illumination is to continue. If it is removed, the light ceases. Again, a liquid receives its shape from the vessel in which it is contained; but were the pressure of the containing sides withdrawn, it would not retain its form for an instant." This form of the argument is far more difficult to separate from a purely first cause argument than is the example of the house's maintenance above, because here the First Cause is insufficient without the candle's or vessel's continued existence.[19]

Thus, Leibniz' argument is in fieri, while Aquinas' argument is both in fieri and in esse. This distinction is an excellent example of the difference between a deistic view (Leibniz) and a theistic view (Aquinas). As a general trend, the modern slants on the cosmological argument, including the Kalam argument, tend to lean very strongly towards an in fieri argument.[citation needed]

Kalām cosmological argument

William Lane Craig gives this argument in the following general form:[20]
  1. Whatever begins to exist has a cause.
  2. The Universe began to exist.
  3. Therefore, the Universe had a cause.
Craig explains that, by the nature of the event (the Universe coming into existence), attributes unique to (the concept of) God must also be attributed to the cause of this event, including, but not limited to, omnipotence, being the Creator, being eternal, and absolute self-sufficiency. Since these attributes are unique to God, anything with these attributes must be God. Something does have these attributes, namely the cause; hence the cause is God. The cause exists; hence, God exists.
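
The deductive core of this syllogism is a single application of a universal premise to a particular case (universal instantiation followed by modus ponens). The following is a minimal, purely illustrative sketch in Lean 4; the predicate names BeginsToExist and HasCause and the constant cosmos are placeholders introduced here, not part of Craig's presentation.

    -- Illustrative sketch of the Kalam syllogism's deductive form (Lean 4).
    -- Premise 1 (p1): whatever begins to exist has a cause.
    -- Premise 2 (p2): the universe (here, `cosmos`) began to exist.
    -- Conclusion: the universe has a cause.
    theorem kalam {Entity : Type}
        (BeginsToExist HasCause : Entity → Prop)
        (cosmos : Entity)
        (p1 : ∀ e, BeginsToExist e → HasCause e)
        (p2 : BeginsToExist cosmos) :
        HasCause cosmos :=
      p1 cosmos p2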

Craig defends the second premise, that the Universe had a beginning, starting with Al-Ghazali's proof that an actual infinite is impossible. If the Universe never had a beginning, then there would indeed be an actual infinite: an infinite number of cause-and-effect events. Hence, the Universe had a beginning.

Objections and counterarguments

What caused the First Cause?

One objection to the argument is that it leaves open the question of why the First Cause is unique in that it does not require any causes. Proponents argue that the First Cause is exempt from having a cause, while opponents argue that this is special pleading or otherwise untrue.[1] Critics often press that arguing for the First Cause's exemption raises the question of why the First Cause is indeed exempt,[21] whereas defenders maintain that this question has been answered by the various arguments, emphasizing that none of its major forms rests on the premise that everything has a cause.[22]

Secondly, it is argued that the premise of causality has been arrived at via a posteriori (inductive) reasoning, which is dependent on experience. David Hume highlighted this problem of induction and argued that causal relations were not true a priori. However, whether inductive or deductive reasoning is the more valuable remains a matter of debate, with the general conclusion being that neither is pre-eminent.[23] Opponents of the argument tend to argue that it is unwise to draw conclusions from an extrapolation of causality beyond experience.[1]

Not evidence for a theist God

The basic cosmological argument merely establishes that a First Cause exists, not that it has the attributes of a theistic god, such as omniscience, omnipotence, and omnibenevolence.[24] This is why the argument is often expanded to show that at least some of these attributes are necessarily true, for instance in the modern Kalam argument given above.[1]

Existence of causal loops

A causal loop is a form of predestination paradox arising where traveling backwards in time is deemed a possibility. A sufficiently powerful entity in such a world would have the capacity to travel backwards in time to a point before its own existence, and to then create itself, thereby initiating everything which follows from it.

The usual reason given to refute the possibility of a causal loop is that it requires the loop as a whole to be its own cause. Richard Hanley argues that causal loops are not logically, physically, or epistemically impossible: "[In timed systems,] the only possibly objectionable feature that all causal loops share is that coincidence is required to explain them."[25]

Existence of infinite causal chains

David Hume and later Paul Edwards have invoked a similar principle in their criticisms of the cosmological argument. Rowe has called the principle the Hume-Edwards principle:[26]
If the existence of every member of a set is explained, the existence of that set is thereby explained.
Nevertheless, David White argues that the notion of an infinite causal regress providing a proper explanation is fallacious.[27] Furthermore, Demea states that even if the succession of causes is infinite, the whole chain still requires a cause.[28] To explain this, suppose there exists a causal chain of infinite contingent beings. If one asks the question, "Why are there any contingent beings at all?", it won't help to be told that "There are contingent beings because other contingent beings caused them." That answer would just presuppose additional contingent beings. An adequate explanation of why some contingent beings exist would invoke a different sort of being, a necessary being that is not contingent.[29] A response might hold that each individual member of the chain is contingent but the infinite chain as a whole is not, or that the whole infinite causal chain is its own cause.

Severinsen argues that there is an "infinite" and complex causal structure.[30] White tried to introduce an argument “without appeal to the principle of sufficient reason and without denying the possibility of an infinite causal regress”.[31]

Big Bang cosmology

Some cosmologists and physicists argue that a challenge to the cosmological argument is the nature of time: "One finds that time just disappears from the Wheeler–DeWitt equation"[32] (Carlo Rovelli). The Big Bang theory states that it is the point at which all dimensions came into existence, the start of both space and time.[33] Then, the question "What was there before the Universe?" makes no sense; the concept of "before" becomes meaningless when considering a situation without time.[33] This has been put forward by J. Richard Gott III, James E. Gunn, David N. Schramm, and Beatrice Tinsley, who said that asking what occurred before the Big Bang is like asking what is north of the North Pole.[33] However, some cosmologists and physicists do attempt to investigate causes for the Big Bang, using such scenarios as the collision of membranes.[34]

Philosopher Edward Feser states that classical philosophers' arguments for the existence of God do not care about the Big Bang or whether the universe had a beginning. The question is not about what got things started or how long they have been going, but rather what keeps them going.[35]

Alternatively, the above objections can be dispelled by separating the Cosmological Argument from the A-Theory of Time[36] and subsequently discussing God as a timeless (rather than "before" in a linear sense) cause of the Big Bang. There is also a Big Bang Argument, which is a variation of the Cosmological Argument using the Big Bang Theory to validate the premise that the Universe had a beginning.[37]

Reprogramming your Biochemistry for Immortality: An Interview with Ray Kurzweil by David Jay Brown

March 8, 2006 by Ray Kurzweil
Original link:  http://www.kurzweilai.net/reprogramming-your-biochemistry-for-immortality-an-interview-with-ray-kurzweil-by-david-jay-brown
 
Scientists are now talking about people staying young and not aging. Ray Kurzweil is taking it a step further: “In addition to radical life extension, we’ll also have radical life expansion. The nanobots will be able to go inside the brain and extend our mental functioning by interacting with our biological neurons.”

Interview conducted by David Jay Brown on February 8, 2006. This interview will be published in Brown’s upcoming book Mavericks of Medicine (2006). Published on KurzweilAI.net March 8, 2006.

Ray Kurzweil is a computer scientist, software developer, inventor, entrepreneur, philosopher, and a leading proponent of radical life extension. He is the coauthor (with Terry Grossman, M.D.) of Fantastic Voyage: Live Long Enough to Live Forever, which is one of the most intriguing and exciting books on life extension around. Kurzweil and Grossman’s approach to health and longevity combines the most current and practical medical knowledge with a soundly-based, yet awe-inspiring visionary perspective of what’s to come.

Kurzweil’s philosophy is built upon the premise that we now have the knowledge to identify and correct the problems caused by most unhealthy genetic predispositions. By taking advantage of the opportunities afforded us by genomic testing, nutritional supplements, and lifestyle adjustments, we can live long enough to reap the benefits of advanced biotechnology and nanotechnology, which will ultimately allow us to conquer aging and live forever. At the heart of Kurzweil’s optimistic philosophy is the notion that human knowledge is growing exponentially, not linearly, and this fact is rarely taken into account when people try to predict the rate of technological advance in the future. Kurzweil predicts that at the current rate of knowledge expansion we’ll have the technology to completely conquer aging within the next couple of decades.

I spoke with Ray on February 8, 2006. Ray speaks very precisely, and he chooses his words carefully. He presents his ideas with a lot of confidence, and I found his optimism to be contagious. We spoke about the importance of genomic testing, some of the common misleading ideas that people have about health, and how biotechnology and nanotechnology will radically affect our longevity in the future.

David: What inspired your interest in life extension?

Ray: Probably the first incident that got me on this path was my father’s illness. This began when I was fifteen, and he died seven years later of heart disease when I was twenty-two. He was fifty-eight. I’ll actually be fifty-eight this Sunday. I sensed a dark cloud over my future, feeling like there was a good chance that I had inherited his disposition to heart disease. When I was thirty-five, I was diagnosed with Type 2 diabetes, and the conventional medical approach made it worse.

So I really approached the situation as an inventor, as a problem to be solved. I immersed myself in the scientific literature, and came up with an approach that allowed me to overcome my diabetes. My levels became totally normal, and in the course of this process I discovered that I did indeed have a disposition, for example, to high cholesterol. My cholesterol was 280 and I also got that down to around 130. That was twenty-two years ago.

I wrote a bestselling health book, which came out in 1993, about that experience and the program that I’d come up with. That’s what really got me on this path of realizing that—if you’re aggressive enough about reprogramming your biochemistry—you can find the ideas that can help you to overcome your genetic dispositions, because they’re out there. They exist.

About seven years ago, after my book The Age of Spiritual Machines came out in 1999, I was at a Foresight Institute conference. I met Terry Grossman there, and we struck up a conversation about this subject—nutrition and health. I went to see him at his longevity clinic in Denver for an evaluation, and we built a friendship. We started exchanging emails about health issues—and that was 10,000 emails ago. We wrote this book Fantastic Voyage together, which really continues my quest. And he also has his own story about how he developed similar ideas, and how we collaborated.

There’s really a lot of knowledge available right now, although, previously, it has not been packaged in the same way that we did it. We have the knowledge to reprogram our biochemistry to overcome disease and aging processes. We can dramatically slow down aging; we can really overcome conditions such as atherosclerosis, which leads to almost all heart attacks and strokes, and diabetes; and we can substantially reduce the risk of cancer with today’s knowledge. And, as you saw from the book, all of that is just what we call ‘Bridge One.’ We’re not saying that taking lots of supplements and changing your diet is going to enable you to live five hundred years. But it will enable Baby Boomers—like Dr. Grossman and myself, and our contemporaries—to be in good shape ten or fifteen years from now, when we really will have the full flowering of the biotechnology revolution, which is ‘Bridge Two.’

Now, this gets into my whole theory of information technology. Biology has become an information technology. It didn’t used to be. Biology used to be hit or miss. We’d just find something that happened to work. We didn’t really understand why it worked, and, invariably, these tools, these drugs, had side-effects. They were very crude tools. Drug development was called drug discovery, because we really weren’t able to reprogram biology. That is now changing. Our understanding of biology, and the ability to manipulate it, is becoming an information technology. We’re understanding the information processes that underlie disease processes, like atherosclerosis, and we’re gaining the tools to reprogram those processes.

Drug development is now entering an era of rational drug design, rather than drug discovery. The important point to realize is that the progress is exponential, not linear. Invariably people—including sophisticated people—do not take that into consideration, and it makes all the difference in the world. The mainstream skeptics declared the fifteen-year genome project a failure after seven and a half years because only one percent of the project was done. The skeptics said, I told you this wasn’t going to work—here you are halfway through the project and you’ve hardly done anything. But the progress was exponential, doubling every year, and the last seven doublings go from one percent to a hundred percent. So the project was done on time. It took fifteen years to sequence HIV. We sequenced the SARS virus in thirty-one days.
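
As a back-of-the-envelope illustration of the exponential-versus-linear point (my own sketch, not part of the interview): starting from one percent completion and doubling every year, the remaining distance to one hundred percent is covered in just seven more doublings.

    # Illustrative arithmetic only: a project at 1% completion, doubling yearly.
    progress = 0.01
    years = 0
    while progress < 1.0:
        progress *= 2
        years += 1
    # Seven doublings take 1% past 100% (1% -> 2% -> 4% -> ... -> 128%).
    print(years, f"{progress:.0%}")  # prints: 7 128%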

There are many other examples of that. We’ve gone from ten dollars to sequence one base pair in 1990 to a penny today. So ten or fifteen years from now it’s going to be a very different landscape. We really will have very powerful interventions, in the form of rationally-designed drugs that can precisely reprogram our biochemistry. We can do it to a large extent today with supplements and nutrition, but it takes a more extensive effort. We’ll have much more powerful tools fifteen years from now, so I want to be in good shape at that time.

Most of my Baby Boomer contemporaries are completely oblivious of this perspective. They just assume that aging is part of the cycle of human life, and at 65 or 70 you start slowing down. Then at eighty you’re dead. So they’re getting ready to retire, and are really unaware of this perspective that things are going to be very different ten or fifteen years from now. This insight really should motivate them to be aggressive about using today’s knowledge. Of course all of this will lead to ‘Bridge Three’ about twenty years from now—the nanotechnology revolution—where we can go beyond the limitations of biology. We’ll have programmable nanobots that can keep us healthy from inside, and provide truly radical life extension.

So that’s the genesis. My interest in life extension stems primarily from my having been diagnosed with Type 2 diabetes. I really consider the diabetes to be a blessing because it prodded me to overcome it, and, in so doing, I realized that I didn’t just have an approach for diabetes, but a general attitude and approach to overcome any health problem, that we really can find the ideas and apply them to overcome the genetic dispositions that we have. There’s a common wisdom that your genes are eighty percent of your health and longevity and lifestyle is only twenty percent. Well, that’s true if you follow the general, watered-down guidelines that our health institutions put out. But if you follow the optimal guidelines that we talk about, you can really overcome almost any genetic disposition. We do have the knowledge to do that.

David: What do you think are some of the common misleading ideas that people have about health and longevity?

Ray: One thing that I just alluded to is the compromised recommendations from our health authorities. I just had a lengthy debate with the Joslin Diabetes Center, which is considered the world’s leading diabetes treatment and research center. I’m on the board, and they’ve just come out with new nutritional guidelines, which are highly compromised. They’re far from ideal, and they acknowledge that. They say, well, we have enough trouble getting people to follow these guidelines, let alone the stricter guidelines that you recommend. And my reply is, you have trouble getting people to follow your guidelines because they don’t work. If people followed your guidelines very precisely they’d still have Type 2 diabetes. They’d still have to take harsh drugs or insulin.

If they follow my guidelines the situation is quite different. I’ve counseled many people about Type 2 diabetes, and Dr. Grossman has treated many people with it, and they come back and they have completely normal levels. Their symptoms are gone, and they don’t have to take insulin or harsh drugs. They feel liberated, and that’s extremely motivating. In many ways it’s easier to make a stricter change. To dramatically reduce your high-glycemic-index carbs is actually easier than moderately reducing them, because if you moderately reduce them you don’t get rid of the cravings for carbs. Carbs are addictive, and it’s just like trying to cut down a little bit on cigarettes. It’s actually easier to cut cigarettes out completely, and it’s also easier to largely cut out high-glycemic-index starches and sugars, because the cravings go away and it’s much easier to follow. But, most importantly, it works along with a few supplements and exercise to overcome most cases of Type 2 diabetes.

However, this doesn’t seem to be the attitude of our health authorities. The nutritional recommendations are consistently compromised. There’s almost no understanding of the role of nutritional supplements, which can be very powerful. I take two hundred and fifty supplements a day, and I monitor my body regularly. I’m not just flying without instrumentation. Being an engineer, I like data and I monitor fifty or sixty different blood levels every few months, and I’m constantly fine-tuning my program. All of my blood levels are ideal. My Homocysteine level many years ago was eleven, but now it’s five. My C-reactive protein is 0.1. My cholesterol is 130. My LDL is about 60, and my HDL—which was 28—is now close to sixty. And so on and so forth.

I’ve also taken biological aging tests, which measure things like tactile sensitivity, reaction time, memory, and decision-making speed. There are forty different tests, and you compare your score to medians for different populations at different ages. When I was forty I came out at about thirty-eight. Now I’m fifty-seven—at least for a few more days—and I come out at forty. So, according to these tests, I’ve only aged two years in the last seventeen years. Now you can dispute the absolute validity of these biological aging tests. It’s just a number, but it’s just evidence that this program is working.

David: Why do you think that genomic testing is important?

Ray: Our program is very much not one-size-fits-all. It’s not a one-trick pony. We’re not saying that if you lower your carbs, lower your fat, or eat a grapefruit a day then everything will be fine. In fact, our publisher initially had a problem with this, but they actually got behind it enthusiastically, because it fundamentally differs, as you know, from most health books that really do have just one idea. We earnestly try to provide a comprehensive understanding of your biology and your body, which does have some complexity to it. Then we let people apply these principles to their own lives.

It is important to emphasize the issues that are of concern to you. We use an analogy of stepping backwards towards a cliff. It’s much easier to change direction before you fall off the cliff. But, generally, medicine doesn’t get involved until the eruption of clinical disease. Someone has a heart attack, or they develop clinical cancer, and that’s very often akin to falling off a cliff. One third of first heart attacks are fatal, and another third cause permanent damage to the heart muscle.

It’s much easier to catch these conditions beforehand. You don’t just catch heart disease or cancer walking down the street one day. These are many years or decades in the making, and you can see where you are in the progression of these diseases. So it’s very important to know thyself, to assess your own situation. Genetic testing is important because you can see what dispositions you have. If you have certain genes that dispose you to heart disease, or, conversely, to cancer or diabetes, then you would give a higher priority to managing those issues, and do more tests to see where you are in the progression of those conditions. Let’s say you do a test and it says you have a genetic disposition to Type 2 diabetes. So you should do a glucose-tolerance test. In fact, we describe a more sophisticated form of that in the book, where you measure insulin as well, and can see if you have early stages of insulin resistance.

Perhaps you have metabolic syndrome, which a very substantial fraction of the population has. If you have these early harbingers of insulin resistance, that could lead to Type 2 diabetes, so obviously the priority of that issue will be greatly heightened. If you don’t have that vulnerability then you don’t have to be as concerned about insulin resistance, and so on. But if you do have insulin resistance, or you have a high level of atherosclerosis, then it really behooves you to take important steps to get these dangerous conditions under control—which you can do. So genomic testing is not something you do by itself. It’s part of a comprehensive assessment program to know your own body—not only what you’re predisposed to, but what your body has already developed in terms of early versions of these degenerative conditions.

David: What are some of the most important nutritional supplements that you would recommend to help prevent cancer and cardiovascular disease?

Ray: We spell all that out in the book. Coenzyme Q10 is important. It never ceases to amaze me that physicians do not tell their patients to take coenzyme Q10 when they prescribe Statin drugs, because it’s well known that Statin drugs deplete the body of coenzyme Q10, and a lot of the side effects that people suffer from Statin drugs, such as muscle weakness, are because of this depletion of coenzyme Q10. In any event, that’s an important supplement. It is involved in energy generation within the mitochondria of each cell. Disruption to the mitochondria is an important aging process, and this supplement will help slow that down. Coenzyme Q10 has a number of protective effects, including lowering blood pressure, helping to control free-radical damage, and protecting the heart.

A lot of recent research shows that Curcumin, which is derived from the spice turmeric, has important anti-inflammatory properties and can protect against cancer, heart disease, and even Alzheimer’s disease.

Alpha-Lipoic acid is an important antioxidant which is both water- and fat-soluble. It can neutralize harmful free radicals, improve insulin sensitivity, and slow down the formation of advanced glycation end products (AGEs), which is another key aging process.

Each of the vitamins is important and plays a key role. Vitamin C is generally protective as a premier antioxidant. It appears to have particular effectiveness in preventing the early stages of atherosclerosis, namely the oxidizing of LDL cholesterol.

In terms of vitamin E, there’s been a lot of negative publicity about that, but if you look carefully at that research you’ll see that all of those studies were done with alpha-Tocopherol, and vitamin E is really a blend of eight different substances—four tocopherols and four Tocotrienols. Alpha-Tocopherol actually depletes levels of gamma-Tocopherol, and gamma-Tocopherol is the form of vitamin E that’s found naturally in food, and is a particularly important one. So we recommend that people take a blend of the fractions of vitamin E, and that they get enough gamma-Tocopherol.

There are a number of others that are important to take in general. If you have high cholesterol, Policosanol is one supplement that is quite effective, and has an independent action from the Statin drugs. Statin drugs actually are quite good. They appear to be anti-inflammatory, so they not only lower cholesterol but attack the inflammatory processes, which underlie many diseases, including atherosclerosis. But as I mentioned it’s important to take coenzyme Q10 if you’re taking Statin drugs.

There are others. Grape seed proanthocyanidin extract has been found to be another effective antioxidant. Resveratrol is another. We have an extensive discussion of the most important supplements in the book.

David: What sort of suggestions would you make to someone who is looking to improve their memory or cognitive performance?

Ray: Vinpocetine, derived from the periwinkle plant, seems to have the best research. It improves cerebral blood flow, increases brain cell ATP (energy) production, and enables better utilization of glucose and oxygen in the brain.

Other supplements that appear to be important for brain health include Phosphatidylserine, Acetyl-L-Carnitine, Pregnenolone, and EPA/DHA. The research appears a bit mixed on Ginkgo Biloba, but we’re not ready to give up on it.

We provide a discussion in the book of a number of smart nutrients that appear to improve brain health. There are also a number of smart drugs being developed, some of which are already in the testing pipeline, that appear to be quite promising.

David: What do you think are the primary causes of aging?

Ray: Aging is not one thing. There’s a number of different processes involved and you can adopt programs that slow down each of these. For example, one process involves the depletion of phosphatidylcholine in the cell membrane. In young people the cell membrane is about sixty or seventy percent phosphatidylcholine, and the cell membrane functions very well then—letting nutrients in and letting toxins out.

The body makes phosphatidylcholine, but very slowly, so over the decades the phosphatidylcholine in the cell membrane depletes, and the cell membrane gets filled in with inert substances, like hard fats and cholesterol, that basically don’t work. This is one reason that cells become brittle with age. The skin of an elderly person loses its suppleness. The organs stop functioning efficiently. So it’s actually a very important aging process, and you can reverse that by supplementing with phosphatidylcholine. If you really want to do it effectively you can take phosphatidylcholine intravenously, as I do. Every week I have an I.V. with phosphatidylcholine. I also take it every day orally. So that’s one aging process we can stop today.

Another important aging process involves oxidation through positively-charged oxygen free radicals, which will steal electrons from cells, disrupting normal enzymatic processes. There are a number of different types of antioxidants that you can take to slow down that process, including vitamin C. You could take vitamin C intravenously to boost that process.

Advanced glycation end-products, or AGEs, are involved in another aging process. This is where proteins develop cross-links with each other, thereby disrupting their function. There are supplements that you can take, such as Alpha Lipoic Acid, that slow that down. There is an experimental drug called ALT-711 (phenacyldimethylthiazolium chloride) that can dissolve the AGE cross-links without damaging the original tissues.

Atherosclerosis is an aging process, and it’s not just taking place in the coronary arteries, of course. It can take place in the cerebral arteries, which ultimately causes cerebral strokes, but it also takes place in the arteries all throughout the body. It can lead to impotence and claudication of the legs and limbs, and, like most of these processes, it’s not linear but exponential, in that it grows by a certain percentage each year.

So that’s why the process of atherosclerosis hardly seems to progress for a long time, but then when it gets to a certain point it can really explode and develop very quickly. We have an extensive program on reducing atherosclerosis, which is both an aging process and a disease process. We cite a number of important supplements that reduce cholesterol and inflammation—such as the omega-3 fats EPA and DHA—as well as the Statin drugs. Supplements like Curcumin [turmeric] are helpful.

Supplements that reduce inflammation will reduce both cancer and the inflammatory processes that lead to atherosclerosis. There are a number of supplements that reduce Homocysteine, which appears to encourage atherosclerosis. These include Folic Acid, vitamins B2, B6, and B12, magnesium, and trimethylglycine (TMG).

So you can attack atherosclerosis five or six different ways, and we recommend that you do them all, so long as there aren’t contraindications for combining treatments. But generally these treatments are independent of each other. If you go to war, you don’t just send in the helicopters. You send in the helicopters, the tanks, the planes, and the infantry. You use your intelligence resources, and attack the enemy every way that you can, with all of your resources. And that’s really what you need to do with these conditions, because they represent very threatening processes. If you are sufficiently proactive, you can generally get them under control.

David: What are some of the new anti-aging treatments that you foresee coming along in the near future, like from stem cell research and therapeutic cloning?

Ray: It depends on what you mean by “near future,” because in ten or fifteen years we foresee a fundamentally transformed landscape.

David: Let’s just say prior to nanotechnology, and then that will be the next question.

Ray: The next frontier is biotechnology. We’re really now entering an era where we can reprogram biology. We’ve sequenced the genome, and we are now reverse-engineering the genome. We’re understanding the roles that the genes play, how they express themselves in proteins, and how these proteins then play roles in sequences of biochemical steps that lead to both orderly processes as well as dysfunction—disease processes, such as atherosclerosis and cancer—and we are gaining the means to reprogram those processes.

For example, we can now turn genes off with RNA interference. This is a new technique that just emerged a few years ago—a medication with little pieces of RNA that latch on to the messenger RNA that is expressing a targeted gene and destroy it, thereby preventing the gene from expressing itself. This effectively turns the gene off. So right away that methodology has lots of applications.

Take the fat insulin receptor gene. That gene basically says ‘hold on to every calorie because the next hunting season may not work out so well.’ That was a good strategy, not only for humans, but for most species, thousands of years ago. It’s still probably a good strategy for animals living in the wild. But we’re not animals living in the wild. It was good for humans a thousand years ago when calories were few and far between. Today it underlies an epidemic of obesity. How about turning that gene off in the fat cells? What would happen?

That was actually tried in mice, and these mice ate ravenously, and they remained slim. They got the health benefits of being slim. They didn’t get diabetes. They didn’t get heart disease. They lived twenty percent longer. They got the benefits of caloric restriction while doing the opposite. So turning off the fat insulin receptor gene in fat cells is the idea. You don’t want to turn it off in muscle cells, for example. This is one methodology that could enable us to prevent obesity, and actually maintain an optimal weight no matter what we ate. So that’s one application of RNA interference.

There’s a number of genes that have been identified that promote atherosclerosis, cancer, diabetes and many other diseases. We’d like to selectively turn those genes off, and slow down or stop these disease processes. There are certain genes that appear to have an influence on the rate of aging. We can amplify the expression of genes similarly, and we can actually add new genetic information—that’s gene therapy. Gene therapy has had problems in the past, because we’ve had difficulty putting the genetic information in the right place at the right chromosome. There are new techniques now that enable us to do that correctly.

For example, you can take a cell out of the body, insert the genetic information in vitro—which is much easier to do in a Petri dish—and examine whether or not the insertion went as intended. If it ended up in the wrong place you discard it. You keep doing this until you get it right. You can examine the cell and make sure that it doesn’t have any DNA errors. So then you take this now modified cell—that has also been certified as being free of DNA errors—and it’s replicated in the Petri dish, so that hundreds of millions of copies of it are created. Then you inject these cells back into the patient, and they will work their way into the right tissues. A lung cell is not going to end up in the liver.

In fact, this was tried by a company I’m involved with, United Therapeutics. I advise them and I’m on their board. They tried this with a fatal disease called pulmonary hypertension, which is a lung disease, and these modified cells ended up in the right place—in the lungs—and actually cured pulmonary hypertension in animal tests. It has now been approved for human trials. That’s just one example of many of being able to actually add new genes. So we’ll be able to subtract genes, over-express certain genes, under-express genes, and add new genes.

Another methodology is cell transdifferentiation, a broader concept than just stem cells. One of the problems with stem cell research or stem cell approaches is this. If I want to grow a new heart, or maybe add new heart cells, because my heart has been damaged, or if I need new pancreatic Islet cells because my pancreatic Islet cells are destroyed, or need some other type of cells, I’d like it to have my DNA. The ultimate stem cell promise, the holy grail of these cell therapies, is to take my own skin cells and reprogram them to be a different kind of cell. How do you do that? Actually, all cells have the same DNA. What’s the difference between a heart cell and a pancreatic Islet cell?

Well, there are certain proteins, short RNA fragments, and peptides that control gene expression. They tell a heart cell that only those genes which should be expressed in a heart cell are expressed. And we’re learning how to manipulate which genes are expressed. By adding certain proteins to the cell we can reprogram a skin cell to be a heart cell or a pancreatic Islet cell. This has been demonstrated in just the last couple of years. So then we can create in a Petri dish as many heart cells or pancreatic Islet cells as I need, with my own DNA, because they’re derived from my cells. Then we inject them, and they’ll work their way into the right tissues. In the process we can discard cells that have DNA errors, so we can basically replenish our cells with DNA-corrected cells.

While we are at it, we can also extend the telomeres. That’s another aging process. As the cells replicate, these little repeating codes of DNA called telomeres grow shorter. They’re like little beads at the end of the DNA strands. One falls off every time the cell replicates, and there are only about fifty of them. So after a certain number of replications the cell can’t replicate anymore. There is actually one enzyme that controls this—telomerase, which is capable of extending the telomeres. Cancer actually works by creating telomerase to enable its cells to replicate without end. Cancer cells become immortal because they can create telomerase.

As we’re rejuvenating our cells, turning a skin cell into the kind of cell that I need, making sure that it has its DNA corrected, we can also extend its telomeres by using telomerase in the Petri dish. Then you’ve got this new cell that’s just like my heart cells were when I was twenty. Now you can replicate that, and then inject it, and really rejuvenate all of the body’s tissues with young versions of my cells. So that’s cell rejuvenation. That’s one idea, or one technique, and there are many different variations of that.

Then there’s turning enzymes on and off. Enzymes are the workhorses of biology. Genes express themselves as enzymes, and the enzymes actually go and do the work. And we can add enzymes. We can turn enzymes off. One example of that is Torcetrapib, which destroys one enzyme, and that enzyme destroys HDL, the good cholesterol in the blood. So when people take Torcetrapib their HDL (good cholesterol) levels soar, and atherosclerosis dramatically slows down or stops. The phase 2 trials were very encouraging, and Pfizer is spending a record one billion dollars on the phase 3 trials. That’s just one example of many of this paradigm: manipulating enzymes. So there are many different ideas to get in and very precisely reprogram the information processes that underlie biology, to undercut disease processes and aging processes, and move them towards healthy rejuvenated processes.

David: How do you see robotics, artificial intelligence, and nanotechnology affecting human health and life span in the future?

Ray: I mentioned that we talk about three bridges to radical life extension in Fantastic Voyage. Bridge One is aggressively applying today’s knowledge, and that’s, of course, a moving frontier, as we learn and gain more and more knowledge. In Chapter 10 of Fantastic Voyage I talk about my program, and at the end I mention that one part of my program is what I call a positive health slope, which means that my program is not fixed.

I spend a certain amount of time every week studying a number of things—new research, new drug developments that are coming out, new information about myself that may come from testing. Just reading the literature I might discover something that’s in fact old knowledge, but there’s so much information out there, I haven’t read everything. So I’m constantly learning more about health and medicine and my own body and modifying my own program. I probably make some small change every week. That doesn’t mean my program is unstable. My program is quite stable, but I’m fine-tuning at the edges quite frequently.

Bridge Two we’ve just been talking about, which is the biotechnology revolution. A very important insight that really changes one’s perspective is to understand that progress is exponential and not linear. So many sophisticated scientists fail to take this into consideration. They just assume that the progress is going to continue at the current pace, and they make this mistake over and over again. If you consider the exponential pace of this process, ten or fifteen years from now we will have really dramatic tools in the forms of medications and cell therapies that can reprogram our health, within the domain of biology.

Bridge Three is nanotechnology. The golden era will be in about twenty years from now. They’ll be some applications earlier, but the real Holy Grail of nanotechnology are nanobots, blood cell-size devices that can go inside the body and keep us healthy from inside. If that sounds very futuristic, I’d actually point out that we’re doing sophisticated tasks already with blood cell-size devices in animal experiments.

One scientist cured Type 1 diabetes in rats with a nano-engineered capsule that has seven-nanometer pores. It lets insulin out in a controlled fashion and blocks antibodies. And that’s what is feasible today. MIT has a project for a nano-engineered device that’s actually smaller than a cell and is capable of detecting specifically the antigens that exist only on certain types of cancer cells. When it detects these antigens it latches onto the cell and burrows inside it. Once it detects that it’s inside, it releases a toxin which destroys the cancer cell. This has actually worked in the Petri dish, which is quite significant because there’s actually not that much that would be different in vivo versus in vitro.

This is a rather sophisticated device because it’s going through these several different stages, and it can do all of these different steps. It’s a nano-engineered device in that it is created at the molecular level. So that’s what is feasible already. If you consider what I call the Law of Accelerating Returns, which is a doubling of the power of these information technologies every year, within twenty-five years these computation-communication technologies, and our understanding of biology, will be a billion times more advanced than they are today. We’re shrinking technology, according to our models, by a factor of over a hundred in 3-D volume per decade.

So these technologies will be a hundred thousand times smaller than they are today in twenty-five years, and a billion times more powerful. And look at what we can already do today experimentally. Twenty-five years from now these nanobots will be quite sophisticated. They’ll have computers in them. They’ll have communication devices. They’ll have small mechanical systems. They’ll really be little robots, and they’ll be able to go inside the body and keep us healthy from inside. They will be able to augment the immune system by destroying pathogens. They will repair DNA errors, remove debris and reverse atherosclerosis. Whatever we don’t get around to finishing with biotechnology, we’ll be able to finish the job with these nano-engineered blood-cell-sized robots or nanobots.
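
The "hundred thousand times smaller" figure follows from the stated shrinkage rate by simple compounding. Here is a back-of-the-envelope check of my own (not from the interview), assuming the factor-of-one-hundred-per-decade reduction in 3-D volume holds uniformly over two and a half decades.

    # Illustrative arithmetic only: ~100x shrinkage in 3-D volume per decade,
    # compounded over 25 years (2.5 decades).
    shrink_per_decade = 100
    decades = 2.5
    total_shrink = shrink_per_decade ** decades
    print(f"{total_shrink:,.0f}")  # ~100,000-fold reduction in volume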

This really will provide radical life extension. The basic metaphor or analogy to keep in mind is to ask the question, How long does a house last? Aubrey de Grey uses this metaphor. The answer is, a house lasts as long as you want it to. If you don’t take care of it the house won’t last that long. It will fall apart. The roof will spring a leak and the house will quickly decay. On the other hand, if you’re diligent, and something goes wrong in the house, you fix it. Periodically you upgrade the technology. You put in a new HVAC system and so forth. With this approach, the house will go on indefinitely, and we do have houses, in fact, that are thousands of years old. So why doesn’t this apply to the human body?

The answer is that we understand how a house works. We understand how to fix a house. We understand all the problems a house can have, because we’ve designed them. We don’t yet have that knowledge and those tools today to do a comparable job with our body. We don’t understand all the things that could go wrong, and we don’t have all the fixes for everything. But we will have this knowledge and these tools. We will have complete models of biology. We’ll have reverse-engineered biology within twenty years, and we’ll have the means to go in and repair all of the problems we have identified.

We’ll be able to indefinitely fix the things that go wrong. We’ll have nanobots that can go in and proactively keep us healthy at a cellular level, without waiting until major diseases flare up, as well as stop and reverse aging processes. We’ll get to a point where people will not age. So when we talk about radical life extension we’re not talking about people growing old and becoming what we think of today as a 95-year-old and then staying at a biological age of 95 for hundreds of years.

We’re talking about people staying young and not aging. Actually, I’m talking about even more than that, because in addition to radical life extension, we’ll also have radical life expansion. The nanobots will be able to go inside the brain and extend our mental functioning by interacting with our biological neurons. Today we already have computers that are placed inside people’s brains, that replace diseased parts of the brain, like the neural implant for Parkinson’s disease. The latest generation of that implant allows you to download new software to your neural implant from outside the patient—and that’s not an experiment, that’s an FDA-approved therapy.

Today these neural implants require surgery, but ultimately we’ll be able to send these brain extenders into the nervous system noninvasively through the capillaries of the brain, without surgery. And we’ll be using them, not just to replace diseased tissue, but to go beyond our current abilities—to extend our memories, extend our pattern recognition and cognitive capabilities, and merge intimately with our technology. So we’ll have radical life expansion along with radical life extension. That’s my vision of what will happen in the next several decades.

David: What are you currently working on?

Ray: I spend maybe forty or fifty percent of my time communicating—in the form of books, articles, interviews, speeches. I give several speeches a month. Then there’s my Web site: KurzweilAI.net. We have a free daily or weekly newsletter; people can sign up by putting in their email address (which is kept in confidence) on the home page.

Then I have several businesses that I’m running, which are in the area of pattern recognition. I’ve been in the reading machine business for thirty-two years. I developed the first print-to-speech technology for the blind in 1976, and we’re introducing a new version that fits in your pocket. A blind person can take it out of their pocket, snap a picture of a handout at a meeting, a sign on a wall, the back of a cereal box, or an electronic display, and the device will read it out loud to them through an earphone or speaker.

We’re developing a new medical technology, which is basically a smart undershirt that monitors your health. There will be a smart bra version for women. It takes a complete morphology EKG and monitors your breathing. So, for example, if you’re a heart patient it could tell you whether your atrial fibrillation is getting better or worse. When you’re exercising it can tell you if you’re getting a problem situation. So it gives you diagnostic information. It can also alert you if you should contact your doctor. So basically your undershirt is sending this information by Bluetooth to your cell phone, and your cell phone is running this cardiac evaluation software. So that’s another project.

Then we have Ray and Terry’s longevity products at RayandTerry.com, which goes along with Fantastic Voyage. We have about twenty products available now, and we’ll have about fifty within a few months. Basically all the things we recommend in the book will be available. We also have combinations. So, for example, if you want to lower cholesterol we have a cholesterol-lowering product, so you don’t have to buy the eight or nine different supplements separately. We put all of our recommendations together in one combination to make it easy for people to follow. There’s a total daily care product that has basic nutritional supplements, like vitamins and minerals, and coenzyme Q-10, and so on. We have a meal-replacement shake that is low carbohydrate, has no sugar, but actually tastes good, which is unusual, because if you’ve ever tasted a low-carb meal-replacement shake you know that the taste is generally not desirable. This might sound promotional, but that was the objective, and it’s actually made up of the nutritional supplements that we recommend. So that’s another company, and those are the companies that we’re running.

Causality

From Wikipedia, the free encyclopedia

Causality (also referred to as causation, or cause and effect) is what connects one process (the cause) with another process or state (the effect), where the first is partly responsible for the second, and the second is partly dependent on the first. In general, a process has many causes, which are said to be causal factors for it, and all lie in its past. An effect can in turn be a cause of, or causal factor for, many other effects, which all lie in its future. Causality is metaphysically prior to notions of time and space.

Causality is an abstraction that indicates how the world progresses, so basic a concept that it is more apt as an explanation of other concepts of progression than as something to be explained by others more basic. The concept is like those of agency and efficacy. For this reason, a leap of intuition may be needed to grasp it.[5] Accordingly, causality is implicit in the logic and structure of ordinary language.[6]

Aristotelian philosophy uses the word "cause" to mean "explanation" or "answer to a why question", including Aristotle's material, formal, efficient, and final "causes"; then the "cause" is the explanans for the explanandum. In this case, failure to recognize that different kinds of "cause" are being considered can lead to futile debate. Of Aristotle's four explanatory modes, the one nearest to the concerns of the present article is the "efficient" one.

The topic of causality remains a staple in contemporary philosophy.

Concept

Metaphysics

The nature of cause and effect is a concern of the subject known as metaphysics.

Ontology

A general metaphysical question about cause and effect is what kind of entity can be a cause, and what kind of entity can be an effect.

One viewpoint on this question is that cause and effect are of one and the same kind of entity, with causality an asymmetric relation between them. That is to say, it would make good sense grammatically to say either "A is the cause and B the effect" or "B is the cause and A the effect", though only one of those two can be actually true. In this view, one opinion, proposed as a metaphysical principle in process philosophy, is that every cause and every effect is respectively some process, event, becoming, or happening.[7] An example is 'his tripping over the step was the cause, and his breaking his ankle the effect'. Another view is that causes and effects are 'states of affairs', with the exact natures of those entities being less restrictively defined than in process philosophy.[8]

Another viewpoint on the question is the more classical one, that a cause and its effect can be of different kinds of entity. For example, in Aristotle's efficient causal explanation, an action can be a cause while an enduring object is its effect. For example, the generative actions of his parents can be regarded as the efficient cause, with Socrates being the effect, Socrates being regarded as an enduring object, in philosophical tradition called a 'substance', as distinct from an action.

Epistemology

Since causality is a subtle metaphysical notion, considerable effort is needed to establish knowledge of it in particular empirical circumstances.

Geometrical significance

Causality has the properties of antecedence and contiguity.[9][10] These are topological, and are ingredients for space-time geometry. As developed by Alfred Robb, these properties allow the derivation of the notions of time and space.[11] Max Jammer writes "the Einstein postulate ... opens the way to a straightforward construction of the causal topology ... of Minkowski space."[12] Causal efficacy propagates no faster than light.[13]

Thus, the notion of causality is metaphysically prior to the notions of time and space. In practical terms, this is because use of the relation of causality is necessary for the interpretation of empirical experiments. Interpretation of experiments is needed to establish the physical and geometrical notions of time and space.

Necessary and sufficient causes

Causes may sometimes be distinguished into two types: necessary and sufficient.[14] A third type of causation, which requires neither necessity nor sufficiency in and of itself, but which contributes to the effect, is called a "contributory cause."
Necessary causes
If x is a necessary cause of y, then the presence of y necessarily implies the prior occurrence of x. The presence of x, however, does not imply that y will occur.[15]
Sufficient causes
If x is a sufficient cause of y, then the presence of x necessarily implies the subsequent occurrence of y. However, another cause z may alternatively cause y. Thus the presence of y does not imply the prior occurrence of x.[15]
Contributory causes
For some specific effect, in a singular case, a factor that is a contributory cause is one amongst several co-occurrent causes. It is implicit that all of them are contributory. For the specific effect, in general, there is no implication that a contributory cause is necessary, though it may be so. In general, a factor that is a contributory cause is not sufficient, because it is by definition accompanied by other causes, which would not count as causes if it were sufficient. For the specific effect, a factor that is on some occasions a contributory cause might on some other occasions be sufficient, but on those other occasions it would not be merely contributory.[16]
J. L. Mackie argues that usual talk of "cause" in fact refers to INUS conditions (insufficient but non-redundant parts of a condition which is itself unnecessary but sufficient for the occurrence of the effect).[17] An example is a short circuit as a cause for a house burning down. Consider the collection of events: the short circuit, the proximity of flammable material, and the absence of firefighters. Together these are unnecessary but sufficient to the house's burning down (since many other collections of events certainly could have led to the house burning down, for example shooting the house with a flamethrower in the presence of oxygen and so forth). Within this collection, the short circuit is an insufficient (since the short circuit by itself would not have caused the fire) but non-redundant (because the fire would not have happened without it, everything else being equal) part of a condition which is itself unnecessary but sufficient for the occurrence of the effect. So, the short circuit is an INUS condition for the occurrence of the house burning down.
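
The distinctions above can be made concrete with a small sketch in Python (a toy formalization over hypothetical case data, not Mackie's own apparatus): a factor is treated as necessary for an effect if the effect never occurs without it, and sufficient if the effect always follows it.

# Hypothetical observed cases; each records whether the factor x and the effect y occurred.
cases = [
    {"x": True,  "y": True},
    {"x": True,  "y": False},   # x present but y absent, so x is not sufficient
    {"x": False, "y": False},
]

def necessary(cases, cause, effect):
    # x is necessary for y: y never occurs without x.
    return all(c[cause] for c in cases if c[effect])

def sufficient(cases, cause, effect):
    # x is sufficient for y: y always follows x.
    return all(c[effect] for c in cases if c[cause])

print(necessary(cases, "x", "y"))   # True in this toy data
print(sufficient(cases, "x", "y"))  # False: the second case has x without y

A contributory (or INUS) factor, like the short circuit, would fail both tests on its own yet still be a non-redundant part of some combination of factors that is jointly sufficient.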

Contrasted with conditionals

Conditional statements are not statements of causality. An important distinction is that statements of causality require the antecedent to precede or coincide with the consequent in time, whereas conditional statements do not require this temporal order. Confusion commonly arises since many different statements in English may be presented using "If ..., then ..." form (and, arguably, because this form is far more commonly used to make a statement of causality). The two types of statements are distinct, however.

For example, all of the following statements are true when interpreting "If ..., then ..." as the material conditional:
  1. If Barack Obama is president of the United States in 2011, then Germany is in Europe.
  2. If George Washington is president of the United States in 2011, then ⟨any arbitrary statement⟩.
The first is true since both the antecedent and the consequent are true. The second is true in sentential logic and indeterminate in natural language, regardless of the consequent statement that follows, because the antecedent is false.
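
To make the material reading concrete, here is a small illustrative sketch in Python (the helper and variable names are hypothetical): "if P then Q" is evaluated as not-P or Q, so the first statement is true because both parts are true, and the second is true simply because its antecedent is false.

def material_conditional(p, q):
    # "If p then q" on the material reading: false only when p is true and q is false.
    return (not p) or q

obama_president_2011 = True
germany_in_europe = True
washington_president_2011 = False
arbitrary_consequent = False  # irrelevant when the antecedent is false

print(material_conditional(obama_president_2011, germany_in_europe))          # True
print(material_conditional(washington_president_2011, arbitrary_consequent))  # True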

The ordinary indicative conditional has somewhat more structure than the material conditional. For instance, although the first is the closest, neither of the preceding two statements seems true as an ordinary indicative reading. But the sentence:
  • If Shakespeare of Stratford-on-Avon did not write Macbeth, then someone else did.
intuitively seems to be true, even though there is no straightforward causal relation in this hypothetical situation between Shakespeare's not writing Macbeth and someone else's actually writing it.

Another sort of conditional, the counterfactual conditional, has a stronger connection with causality, yet even counterfactual statements are not all examples of causality. Consider the following two statements:
  1. If A were a triangle, then A would have three sides.
  2. If switch S were thrown, then bulb B would light.
In the first case, it would not be correct to say that A's being a triangle caused it to have three sides, since the relationship between triangularity and three-sidedness is that of definition. The property of having three sides actually determines A's state as a triangle. Nonetheless, even when interpreted counterfactually, the first statement is true. An early version of Aristotle's "four cause" theory is described as recognizing "essential cause". In this version of the theory, that the closed polygon has three sides is said to be the "essential cause" of its being a triangle.[18] This use of the word 'cause' is of course now far obsolete. Nevertheless, it is within the scope of ordinary language to say that it is essential to a triangle that it has three sides.

A full grasp of the concept of conditionals is important to understanding the literature on causality. In everyday language, loose conditional statements are often enough made, and need to be interpreted carefully.

Questionable cause

Fallacies of questionable cause, also known as causal fallacies, non-causa pro causa (Latin for "non-cause for cause"), or false cause, are informal fallacies where a cause is incorrectly identified.

Theories

Counterfactual theories

Subjunctive conditionals are familiar from ordinary language. They are of the form, if A were the case, then B would be the case, or if A had been the case, then B would have been the case. Counterfactual conditionals are specifically subjunctive conditionals whose antecedents are in fact false, hence the name. However the term used technically may apply to conditionals with true antecedents as well.

Psychological research shows that people's thoughts about the causal relationships between events influence their judgments of the plausibility of counterfactual alternatives, and conversely, their counterfactual thinking about how a situation could have turned out differently changes their judgments of the causal role of events and agents. Nonetheless, their identification of the cause of an event, and their counterfactual thought about how the event could have turned out differently do not always coincide.[19] People distinguish between various sorts of causes, e.g., strong and weak causes.[20] Research in the psychology of reasoning shows that people make different sorts of inferences from different sorts of causes, as found in the fields of cognitive linguistics[21] and accident analysis[22][23] for example.

In the philosophical literature, the suggestion that causation is to be defined in terms of a counterfactual relation is made by the 18th-century Scottish philosopher David Hume. Hume remarks that we may define the relation of cause and effect such that "where, if the first object had not been, the second never had existed."[24]

More full-fledged analysis of causation in terms of counterfactual conditionals only came in the 20th Century after development of the possible world semantics for the evaluation of counterfactual conditionals. In his 1973 paper "Causation," David Lewis proposed the following definition of the notion of causal dependence:[25]
An event E causally depends on C if, and only if, (i) if C had occurred, then E would have occurred, and (ii) if C had not occurred, then E would not have occurred.
Causation is then defined as a chain of causal dependence. That is, C causes E if and only if there exists a sequence of events C, D1, D2, ... Dk, E such that each event in the sequence depends on the previous.

Note that the analysis does not purport to explain how we make causal judgements or how we reason about causation, but rather to give a metaphysical account of what it is for there to be a causal relation between some pair of events. If correct, the analysis has the power to explain certain features of causation. Knowing that causation is a matter of counterfactual dependence, we may reflect on the nature of counterfactual dependence to account for the nature of causation. For example, in his paper "Counterfactual Dependence and Time's Arrow," Lewis sought to account for the time-directedness of counterfactual dependence in terms of the semantics of the counterfactual conditional.[26] If correct, this theory can serve to explain a fundamental part of our experience, which is that we can only causally affect the future but not the past.

Probabilistic causation

Interpreting causation as a deterministic relation means that if A causes B, then A must always be followed by B. In this sense, war does not cause deaths, nor does smoking cause cancer or emphysema. As a result, many turn to a notion of probabilistic causation. Informally, A ("The person is a smoker") probabilistically causes B ("The person has now or will have cancer at some time in the future"), if the information that A occurred increases the likelihood of B's occurrence. Formally, P{B|A} ≥ P{B}, where P{B|A} is the conditional probability that B will occur given the information that A occurred, and P{B} is the probability that B will occur having no knowledge whether A did or did not occur. This intuitive condition is not adequate as a definition for probabilistic causation because it is too general and thus does not meet our intuitive notion of cause and effect. For example, if A denotes the event "The person is a smoker," B denotes the event "The person now has or will have cancer at some time in the future" and C denotes the event "The person now has or will have emphysema some time in the future," then the following three relationships hold: P{B|A} ≥ P{B}, P{C|A} ≥ P{C} and P{B|C} ≥ P{B}. The last relationship states that knowing that the person has emphysema increases the likelihood that he will have cancer. The reason for this is that having the information that the person has emphysema increases the likelihood that the person is a smoker, thus indirectly increasing the likelihood that the person will have cancer. However, we would not want to conclude that having emphysema causes cancer. Thus, we need additional conditions such as a temporal relationship of A to B and a rational explanation as to the mechanism of action. It is hard to quantify this last requirement and thus different authors prefer somewhat different definitions.
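
The smoking example can be made concrete with a small simulation (a sketch with made-up probabilities): smoking raises the chance of both cancer and emphysema, and although emphysema does not cause cancer, conditioning on emphysema still raises the estimated probability of cancer because both share smoking as a common cause.

import random

random.seed(0)
n = 100_000
cancer_given_smoker, cancer_given_nonsmoker = 0.15, 0.02       # made-up numbers
emphysema_given_smoker, emphysema_given_nonsmoker = 0.20, 0.01

people = []
for _ in range(n):
    smoker = random.random() < 0.3
    cancer = random.random() < (cancer_given_smoker if smoker else cancer_given_nonsmoker)
    emphysema = random.random() < (emphysema_given_smoker if smoker else emphysema_given_nonsmoker)
    people.append((smoker, cancer, emphysema))

def prob(pred, given=lambda p: True):
    # Empirical probability of pred among the people satisfying the condition 'given'.
    sub = [p for p in people if given(p)]
    return sum(pred(p) for p in sub) / len(sub)

print("P(cancer)             =", round(prob(lambda p: p[1]), 3))
print("P(cancer | emphysema) =", round(prob(lambda p: p[1], lambda p: p[2]), 3))
# The second number is larger even though emphysema does not cause cancer:
# both are effects of the common cause, smoking.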

Causal calculus

When experimental interventions are infeasible or illegal, the derivation of cause-and-effect relationships from observational studies must rest on some qualitative theoretical assumptions, for example, that symptoms do not cause diseases, usually expressed in the form of missing arrows in causal graphs such as Bayesian networks or path diagrams. The theory underlying these derivations relies on the distinction between conditional probabilities, as in P(cancer|smoking), and interventional probabilities, as in P(cancer|do(smoking)). The former reads: "the probability of finding cancer in a person known to smoke, having started, unforced by the experimenter, to do so at an unspecified time in the past", while the latter reads: "the probability of finding cancer in a person forced by the experimenter to smoke at a specified time in the past". The former is a statistical notion that can be estimated by observation with negligible intervention by the experimenter, while the latter is a causal notion which is estimated in an experiment with an important controlled randomized intervention. It is specifically characteristic of quantal phenomena that observations defined by incompatible variables always involve important intervention by the experimenter, as described quantitatively by the Heisenberg uncertainty principle. In classical thermodynamics, processes are initiated by interventions called thermodynamic operations. In other branches of science, for example astronomy, the experimenter can often observe with negligible intervention.

The theory of "causal calculus"[27] permits one to infer interventional probabilities from conditional probabilities in causal Bayesian networks with unmeasured variables. One very practical result of this theory is the characterization of confounding variables, namely, a sufficient set of variables that, if adjusted for, would yield the correct causal effect between variables of interest. It can be shown that a sufficient set for estimating the causal effect of X on Y is any set of non-descendants of X that d-separate X from Y after removing all arrows emanating from X. This criterion, called "backdoor", provides a mathematical definition of "confounding" and helps researchers identify accessible sets of variables worthy of measurement.

Structure learning

While derivations in causal calculus rely on the structure of the causal graph, parts of the causal structure can, under certain assumptions, be learned from statistical data. The basic idea goes back to Sewall Wright's 1921 work[28] on path analysis. A "recovery" algorithm was developed by Rebane and Pearl (1987)[29] which rests on Wright's distinction between the three possible types of causal substructures allowed in a directed acyclic graph (DAG):
  1. X → Y → Z
  2. X ← Y → Z
  3. X → Y ← Z
Type 1 and type 2 represent the same statistical dependencies (i.e., X and Z are independent given Y) and are, therefore, indistinguishable within purely cross-sectional data. Type 3, however, can be uniquely identified, since X and Z are marginally independent and all other pairs are dependent. Thus, while the skeletons (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when X and Z have common ancestors, except that one must first condition on those ancestors. Algorithms have been developed to systematically determine the skeleton of the underlying graph and, then, orient all arrows whose directionality is dictated by the conditional independencies observed.
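
This can be checked on simulated data (a sketch with arbitrary linear models and Gaussian noise): in the chain X → Y → Z, X and Z are marginally correlated but become nearly uncorrelated once Y is conditioned on (here by residualizing on Y), whereas in the collider X → Y ← Z they are marginally uncorrelated and become correlated after conditioning on Y.

import numpy as np

rng = np.random.default_rng(0)
n = 50_000

def partial_corr(x, z, y):
    # Correlation between the residuals of x and z after linearly regressing each on y.
    rx = x - np.polyval(np.polyfit(y, x, 1), y)
    rz = z - np.polyval(np.polyfit(y, z, 1), y)
    return np.corrcoef(rx, rz)[0, 1]

# Chain: X -> Y -> Z
x = rng.normal(size=n); y = x + rng.normal(size=n); z = y + rng.normal(size=n)
print("chain   ", round(np.corrcoef(x, z)[0, 1], 2), round(partial_corr(x, z, y), 2))

# Collider: X -> Y <- Z
x = rng.normal(size=n); z = rng.normal(size=n); y = x + z + rng.normal(size=n)
print("collider", round(np.corrcoef(x, z)[0, 1], 2), round(partial_corr(x, z, y), 2))
# First number: marginal correlation of X and Z; second: correlation given Y.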

Alternative methods of structure learning search through the many possible causal structures among the variables, and remove ones which are strongly incompatible with the observed correlations. In general this leaves a set of possible causal relations, which should then be tested by analyzing time series data or, preferably, designing appropriately controlled experiments. In contrast with Bayesian networks, path analysis (and its generalization, structural equation modeling) serves better to estimate a known causal effect or to test a causal model than to generate causal hypotheses.

For nonexperimental data, causal direction can often be inferred if information about time is available. This is because (according to many, though not all, theories) causes must precede their effects temporally. This can be determined by statistical time series models, for instance, or with a statistical test based on the idea of Granger causality, or by direct experimental manipulation. The use of temporal data can permit statistical tests of a pre-existing theory of causal direction. For instance, our degree of confidence in the direction and nature of causality is much greater when supported by cross-correlations, ARIMA models, or cross-spectral analysis using vector time series data than by cross-sectional data.
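
For example, a Granger-causality test can be run with the statsmodels Python package (assumed to be installed; the simulated series and lag order are arbitrary). This sketch generates a series y driven by lagged values of x and then asks whether the second column of the data helps predict the first beyond its own past.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + rng.normal()  # x leads y by one step

# Column order matters: the test asks whether the second column (x) Granger-causes
# the first column (y). Small p-values at some lag suggest that it does.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2)  # returns a dict keyed by lag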

Derivation theories

Nobel Prize laureate Herbert A. Simon and philosopher Nicholas Rescher[33] claim that the asymmetry of the causal relation is unrelated to the asymmetry of any mode of implication that contraposes. Rather, a causal relation is not a relation between values of variables, but a function of one variable (the cause) onto another (the effect). So, given a system of equations, and a set of variables appearing in these equations, we can introduce an asymmetric relation among individual equations and variables that corresponds perfectly to our commonsense notion of a causal ordering. The system of equations must have certain properties; most importantly, if some values are chosen arbitrarily, the remaining values will be determined uniquely through a path of serial discovery that is perfectly causal. They postulate that the inherent serialization of such a system of equations may correctly capture causation in all empirical fields, including physics and economics.

Manipulation theories

Some theorists have equated causality with manipulability. Under these theories, x causes y only in the case that one can change x in order to change y. This coincides with commonsense notions of causation, since often we ask causal questions in order to change some feature of the world. For instance, we are interested in knowing the causes of crime so that we might find ways of reducing it.

These theories have been criticized on two primary grounds. First, theorists complain that these accounts are circular. Attempting to reduce causal claims to manipulation requires that manipulation be more basic than causal interaction. But describing manipulations in non-causal terms has proved substantially difficult.

The second criticism centers on concerns of anthropocentrism. It seems to many people that causality is some existing relationship in the world that we can harness for our desires. If causality is identified with our manipulation, then this intuition is lost. In this sense, it makes humans overly central to interactions in the world.

Some recent accounts defend manipulability theories without claiming to reduce causality to manipulation. These accounts use manipulation as a sign or feature of causation without claiming that manipulation is more fundamental than causation.[27][38]

Process theories

Some theorists are interested in distinguishing between causal processes and non-causal processes (Russell 1948; Salmon 1984).[39][40] These theorists often want to distinguish between a process and a pseudo-process. As an example, a ball moving through the air (a process) is contrasted with the motion of a shadow (a pseudo-process). The former is causal in nature while the latter is not.

Salmon (1984)[39] claims that causal processes can be identified by their ability to transmit an alteration over space and time. An alteration of the ball (a mark by a pen, perhaps) is carried with it as the ball goes through the air. On the other hand, an alteration of the shadow (insofar as it is possible) will not be transmitted by the shadow as it moves along.

These theorists claim that the important concept for understanding causality is not causal relationships or causal interactions, but rather identifying causal processes. The former notions can then be defined in terms of causal processes.

 
Why-Because Graph of the capsizing of the Herald of Free Enterprise.

Fields

Science

For the scientific investigation of efficient causality, the cause and effect are each best conceived of as temporally transient processes.

Within the conceptual frame of the scientific method, an investigator sets up several distinct and contrasting temporally transient material processes that have the structure of experiments, and records candidate material responses, normally intending to determine causality in the physical world.[41] For instance, one may want to know whether a high intake of carrots causes humans to develop the bubonic plague. The quantity of carrot intake is a process that is varied from occasion to occasion. The occurrence or non-occurrence of subsequent bubonic plague is recorded. To establish causality, the experiment must fulfill certain criteria, only one example of which is mentioned here. For example, instances of the hypothesized cause must be set up to occur at a time when the hypothesized effect is relatively unlikely in the absence of the hypothesized cause; such unlikelihood is to be established by empirical evidence. A mere observation of a correlation is not nearly adequate to establish causality. In nearly all cases, establishment of causality relies on repetition of experiments and probabilistic reasoning. Hardly ever is causality established more firmly than as more or less probable. It is often most convenient for establishment of causality if the contrasting material states of affairs are fully comparable, and differ through only one variable factor, perhaps measured by a real number. Otherwise, experiments are usually difficult or impossible to interpret.

In some sciences, it is very difficult or nearly impossible to set up material states of affairs that closely test hypotheses of causality. Such sciences can in some sense be regarded as "softer".

Physics

One has to be careful in the use of the word cause in physics. Properly speaking, the hypothesized cause and the hypothesized effect are each temporally transient processes. For example, force is a useful concept for the explanation of acceleration, but force is not by itself a cause. More is needed. For example, a temporally transient process might be characterized by a definite change of force at a definite time. Such a process can be regarded as a cause. Causality is not inherently implied in equations of motion, but postulated as an additional constraint that needs to be satisfied (i.e. a cause always precedes its effect). This constraint has mathematical implications[42] such as the Kramers-Kronig relations.

Causality is one of the most fundamental and essential notions of physics.[43] Causal efficacy cannot propagate faster than light. Otherwise, reference coordinate systems could be constructed (using the Lorentz transform of special relativity) in which an observer would see an effect precede its cause (i.e. the postulate of causality would be violated).

Causal notions appear in the context of the flow of mass-energy. For example, it is commonplace to argue that causal efficacy can be propagated by waves (such as electromagnetic waves) only if they propagate no faster than light. Wave packets have group velocity and phase velocity. For waves that propagate causal efficacy, both of these must travel no faster than light. Thus light waves often propagate causal efficacy but de Broglie waves often have phase velocity faster than light and consequently cannot be propagating causal efficacy.
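
A short numeric sketch (Python, using the electron mass and an arbitrary momentum) illustrates the point for a relativistic free particle with E = sqrt((pc)² + (mc²)²): the de Broglie phase velocity E/p exceeds c, while the group velocity dE/dp = pc²/E equals the particle velocity and stays below c, the two multiplying to c².

# Relativistic free particle: E = sqrt((p c)^2 + (m c^2)^2).
# Phase velocity v_p = omega/k = E/p; group velocity v_g = dE/dp = p c^2 / E.
c = 299_792_458.0          # speed of light, m/s
m = 9.109e-31              # electron mass, kg (illustrative)
p = 1.0e-22                # momentum, kg·m/s (arbitrary)

E = ((p * c) ** 2 + (m * c ** 2) ** 2) ** 0.5
v_phase = E / p
v_group = p * c ** 2 / E

print(v_phase > c, v_group < c)       # True True
print(v_phase * v_group / c ** 2)     # ~1.0: the product equals c^2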

Causal notions are important in general relativity to the extent that the existence of an arrow of time demands that the universe's semi-Riemannian manifold be orientable, so that "future" and "past" are globally definable quantities.

Engineering

A causal system is a system whose output and internal states depend only on current and previous input values. A system that has some dependence on input values from the future (in addition to possible past or current input values) is termed an acausal system, and a system that depends solely on future input values is an anticausal system. Acausal filters, for example, can only exist as postprocessing filters, because these filters can extract future values from a memory buffer or a file.
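
The distinction can be seen in a simple moving-average filter (an illustrative Python sketch): the causal version uses only current and past samples, while the centered version also looks at future samples and therefore can only be applied offline, after the data have been recorded.

def causal_moving_average(x, window=3):
    # Output at time t uses only samples x[t - window + 1 .. t].
    out = []
    for t in range(len(x)):
        lo = max(0, t - window + 1)
        out.append(sum(x[lo:t + 1]) / (t + 1 - lo))
    return out

def acausal_moving_average(x, half=1):
    # Output at time t also uses future samples x[t + 1 .. t + half],
    # so it can only be computed once those samples exist (postprocessing).
    out = []
    for t in range(len(x)):
        lo, hi = max(0, t - half), min(len(x), t + half + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

signal = [0, 0, 1, 0, 0, 3, 0]
print(causal_moving_average(signal))
print(acausal_moving_average(signal))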

Biology, medicine and epidemiology

Austin Bradford Hill built upon the work of Hume and Popper and suggested in his paper "The Environment and Disease: Association or Causation?" that aspects of an association such as strength, consistency, specificity and temporality be considered in attempting to distinguish causal from noncausal associations in the epidemiological situation. (See Bradford-Hill criteria.) He did not note however, that temporality is the only necessary criterion among those aspects. Directed acyclic graphs (DAGs) are increasingly used in epidemiology to help enlighten causal thinking.[44]

Psychology

Psychologists take an empirical approach to causality, investigating how people and non-human animals detect or infer causation from sensory information, prior experience and innate knowledge.
Attribution
Attribution theory is the theory concerning how people explain individual occurrences of causation. Attribution can be external (assigning causality to an outside agent or force—claiming that some outside thing motivated the event) or internal (assigning causality to factors within the person—taking personal responsibility or accountability for one's actions and claiming that the person was directly responsible for the event). Taking causation one step further, the type of attribution a person provides influences their future behavior.

The intention behind the cause or the effect can be covered by the subject of action. See also accident; blame; intent; and responsibility.
Causal powers
Whereas David Hume argued that causes are inferred from non-causal observations, Immanuel Kant claimed that people have innate assumptions about causes. Within psychology, Patricia Cheng (1997)[45] attempted to reconcile the Humean and Kantian views. According to her power PC theory, people filter observations of events through a basic belief that causes have the power to generate (or prevent) their effects, thereby inferring specific cause-effect relations.
Causation and salience
Our view of causation depends on what we consider to be the relevant events. Another way to view the statement, "Lightning causes thunder" is to see both lightning and thunder as two perceptions of the same event, viz., an electric discharge that we perceive first visually and then aurally.
Naming and causality
David Sobel and Alison Gopnik from the Psychology Department of UC Berkeley designed a device known as the blicket detector which would turn on when an object was placed on it. Their research suggests that "even young children will easily and swiftly learn about a new causal power of an object and spontaneously use that information in classifying and naming the object."[46]
Perception of launching events
Some researchers such as Anjan Chatterjee at the University of Pennsylvania and Jonathan Fugelsang at the University of Waterloo are using neuroscience techniques to investigate the neural and psychological underpinnings of causal launching events in which one object causes another object to move. Both temporal and spatial factors can be manipulated.[47]

Statistics and economics

Statistics and economics usually employ pre-existing data or experimental data to infer causality by regression methods. The body of statistical techniques involves substantial use of regression analysis. Typically a linear relationship such as
y_i = a_0 + a_1 x_{1,i} + a_2 x_{2,i} + ... + a_k x_{k,i} + e_i
is postulated, in which y_i is the ith observation of the dependent variable (hypothesized to be the caused variable), x_{j,i} for j = 1, ..., k is the ith observation on the jth independent variable (hypothesized to be a causative variable), and e_i is the error term for the ith observation (containing the combined effects of all other causative variables, which must be uncorrelated with the included independent variables). If there is reason to believe that none of the x_j is caused by y, then estimates of the coefficients a_j are obtained. If the null hypothesis that a_j = 0 is rejected, then the alternative hypothesis that a_j ≠ 0, and equivalently that x_j causes y, cannot be rejected. On the other hand, if the null hypothesis that a_j = 0 cannot be rejected, then equivalently the hypothesis of no causal effect of x_j on y cannot be rejected. Here the notion of causality is one of contributory causality as discussed above: if the true value a_j ≠ 0, then a change in x_j will result in a change in y unless some other causative variable(s), either included in the regression or implicit in the error term, change in such a way as to exactly offset its effect; thus a change in x_j is not sufficient to change y. Likewise, a change in x_j is not necessary to change y, because a change in y could be caused by something implicit in the error term (or by some other causative explanatory variable included in the model).
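
A sketch of this procedure on simulated data (arbitrary coefficients; numpy and statsmodels are assumed to be available): y is generated with a true coefficient of 2 on x1 and 0 on x2, and the fitted coefficients and p-values behave as the paragraph above describes.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
e = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.0 * x2 + e   # x1 is a contributory cause of y; x2 is not

X = sm.add_constant(np.column_stack([x1, x2]))  # columns: constant, x1, x2
fit = sm.OLS(y, X).fit()
print(fit.params)    # roughly [1, 2, 0]
print(fit.pvalues)   # H0: a_j = 0 is rejected for x1 but not for x2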

The above way of testing for causality requires belief that there is no reverse causation, in which y would cause x_{j}. This belief can be established in one of several ways. First, the variable x_{j} may be a non-economic variable: for example, if rainfall amount x_{j} is hypothesized to affect the futures price y of some agricultural commodity, it is impossible that in fact the futures price affects rainfall amount (provided that cloud seeding is never attempted). Second, the instrumental variables technique may be employed to remove any reverse causation by introducing a role for other variables (instruments) that are known to be unaffected by the dependent variable. Third, the principle that effects cannot precede causes can be invoked, by including on the right side of the regression only variables that precede in time the dependent variable; this principle is invoked, for example, in testing for Granger causality and in its multivariate analog, vector autoregression, both of which control for lagged values of the dependent variable while testing for causal effects of lagged independent variables.

Regression analysis controls for other relevant variables by including them as regressors (explanatory variables). This helps to avoid false inferences of causality due to the presence of a third, underlying, variable that influences both the potentially causative variable and the potentially caused variable: its effect on the potentially caused variable is captured by directly including it in the regression, so that effect will not be picked up as an indirect effect through the potentially causative variable of interest.

Given the above procedures, coincidental (as opposed to causal) correlation can be probabilistically rejected if data samples are large and if regression results pass cross validation tests showing that the correlations hold even for data that were not used in the regression.

Metaphysics

The deterministic world-view holds that the history of the universe can be exhaustively represented as a progression of events following one after another as cause and effect.[10] The incompatibilist version of this holds that there is no such thing as "free will". Compatibilism, on the other hand, holds that determinism is compatible with, or even necessary for, free will.[48]

Management

Used in management and engineering, an Ishikawa diagram shows the factors that cause the effect. Smaller arrows connect the sub-causes to major causes.

For quality control in manufacturing in the 1960s, Kaoru Ishikawa developed a cause and effect diagram, known as an Ishikawa diagram or fishbone diagram. The diagram categorizes causes, for example into six main categories, which are then sub-divided. Ishikawa's method identifies "causes" in brainstorming sessions conducted among various groups involved in the manufacturing process. These groups can then be labeled as categories in the diagrams. The use of these diagrams has now spread beyond quality control, and they are used in other areas of management and in design and engineering. Ishikawa diagrams have been criticized for failing to make the distinction between necessary conditions and sufficient conditions. It seems that Ishikawa was not even aware of this distinction.[49]

Humanities

History

In the discussion of history, events are sometimes considered as if in some way being agents that can then bring about other historical events. Thus, the combination of poor harvests, the hardships of the peasants, high taxes, lack of representation of the people, and kingly ineptitude are among the causes of the French Revolution. This is a somewhat Platonic and Hegelian view that reifies causes as ontological entities. In Aristotelian terminology, this use approximates to the case of the efficient cause.

Some philosophers of history such as Arthur Danto have claimed that "explanations in history and elsewhere" describe "not simply an event—something that happens—but a change".[50] Like many practicing historians, they treat causes as intersecting actions and sets of actions which bring about "larger changes", in Danto’s words: to decide "what are the elements which persist through a change" is "rather simple" when treating an individual’s "shift in attitude", but "it is considerably more complex and metaphysically challenging when we are interested in such a change as, say, the break-up of feudalism or the emergence of nationalism".[51]

Much of the historical debate about causes has focused on the relationship between communicative and other actions, between singular and repeated ones, and between actions, structures of action or group and institutional contexts and wider sets of conditions.[52] John Gaddis has distinguished between exceptional and general causes (following Marc Bloch) and between "routine" and "distinctive links" in causal relationships: "in accounting for what happened at Hiroshima on August 6, 1945, we attach greater importance to the fact that President Truman ordered the dropping of an atomic bomb than to the decision of the Army Air Force to carry out his orders."[53] He has also pointed to the difference between immediate, intermediate and distant causes.[54] For his part, Christopher Lloyd puts forward four "general concepts of causation" used in history: the "metaphysical idealist concept, which asserts that the phenomena of the universe are products of or emanations from an omnipotent being or such final cause"; "the empiricist (or Humean) regularity concept, which is based on the idea of causation being a matter of constant conjunctions of events"; "the functional/teleological/consequential concept", which is "goal-directed, so that goals are causes"; and the "realist, structurist and dispositional approach, which sees relational structures and internal dispositions as the causes of phenomena".[55]

Law

According to law and jurisprudence, legal cause must be demonstrated to hold a defendant liable for a crime or a tort (i.e. a civil wrong such as negligence or trespass). It must be proven that causality, or a "sufficient causal link" relates the defendant's actions to the criminal event or damage in question. Causation is also an essential legal element that must be proven to qualify for remedy measures under international trade law.[56]

Theology

Note the concept of omnicausality in Abrahamic theology, which is the belief that God has set in motion all events at the dawn of time; he is the determiner and the cause of all things. It is therefore an attempt to reconcile the apparent incompatibility between determinism and the existence of an omnipotent god.[57]

History

Hindu philosophy

The Eastern origins of karma lie in the literature of the Vedic period (c. 1750–500 BCE).[58] Karma is the belief, held by Sanathana Dharma and major religions, that a person's actions cause certain effects in the current life and/or in a future life, positively or negatively. The various philosophical schools (darsanas) provide different accounts of the subject. The doctrine of satkaryavada affirms that the effect inheres in the cause in some way. The effect is thus either a real or apparent modification of the cause. The doctrine of asatkaryavada affirms that the effect does not inhere in the cause, but is a new arising.

Bhagavad-gītā 18.14 identifies five causes for any action (knowing which it can be perfected): the body, the individual soul, the senses, the efforts and the supersoul.

According to Monier-Williams, in the Nyāya causation theory from Sutra I.2.I,2 in the Vaisheshika philosophy, the non-existence of the effect follows from the non-existence of the cause, but the non-existence of the cause does not follow from the non-existence of the effect. A cause precedes an effect. Using threads and cloth as metaphors, the three causes are:
  1. Co-inherence cause: resulting from substantial contact, 'substantial causes', threads are substantial to cloth, corresponding to Aristotle's material cause.
  2. Non-substantial cause: Methods putting threads into cloth, corresponding to Aristotle's formal cause.
  3. Instrumental cause: Tools to make the cloth, corresponding to Aristotle's efficient cause.
Monier-Williams also proposed that Aristotle's and the Nyaya's causality are considered conditional aggregates necessary to man's productive work.[60]

Buddhist philosophy

The general or universal definition of pratityasamutpada (or "dependent origination" or "dependent arising" or "interdependent co-arising") is that everything arises in dependence upon multiple causes and conditions; nothing exists as a singular, independent entity. A traditional example in Buddhist texts is of three sticks standing upright and leaning against each other and supporting each other. If one stick is taken away, the other two will fall to the ground.

The Chittamatrin approach, Asanga's (c. 400 CE) mind-only Buddhist school, asserts that objects cause consciousness in the mind's image. Because causes precede effects and must be different entities from them, subject and object are different. For this school, there are no objects which are entities external to a perceiving consciousness. The Chittamatrin and the Yogachara Svatantrika schools accept that there are no objects external to the observer's causality. This largely follows the Nikayas approach.[61][62][63][64]

The Abhidharmakośakārikā approach is Vasubandhu's Abhidharma commentary text in the Sarvāstivāda school (c. 500 CE). It has four intricate causal conditioning constructions: 1) the root cause, 2) the immediate antecedent, 3) the object support, and 4) predominance. Then, the six causes are: 1) instrumentality (kāraṇahetu), deemed the primary factor in result production; 2) simultaneity or coexistence, which connects phenomena that arise simultaneously; 3) homogeneity, explaining the homogenous flow that evokes phenomena continuity; 4) association, which operates only between mental factors and explains why consciousness appears as assemblages to mental factors; 5) dominance, which forms one's habitual cognitive and behaviorist dispositions; and 6) fruition, referring to whatever is the actively wholesome or unwholesome result. The four conditions and six causes interact with each other in explaining phenomenal experience: for instance, each conscious moment acts both as the homogeneous cause and as the immediate antecedent condition for the rise of a subsequent moment of consciousness and its concomitants.

The Vaibhashika (c. 500 CE) is an early Buddhist school which favors direct object contact and accepts simultaneous cause and effect. This is based on the consciousness example, which says that intentions and feelings are mutually accompanying mental factors that support each other like poles in a tripod. In contrast, those who reject simultaneous cause and effect say that if the effect already exists, then it cannot be produced in the same way again. How past, present and future are accepted is a basis for the causality viewpoints of the various Buddhist schools.[65][66][67]

All the classic Buddhist schools teach karma. "The law of karma is a special instance of the law of cause and effect, according to which all our actions of body, speech, and mind are causes and all our experiences are their effects."[68]

The Baha'i concept of causation has been a unifying force for this young religion. The belief in a common biological and ideological ancestry has made it possible for Baha'is to recognize Buddha, Moses, Jesus and Muhammad. Unfortunately, this has led to the systematic persecution of Baha'is by many caliphates.[69]

Western philosophy

Aristotelian

Aristotle identified four kinds of answer or explanatory mode to various "Why?" questions. He thought that, for any given topic, all four kinds of explanatory mode were important, each in its own right. As a result of traditional specialized philosophical peculiarities of language, with translations between ancient Greek, Latin, and English, the word 'cause' is nowadays in specialized philosophical writings used to label Aristotle's four kinds.[18][70] In ordinary language, there are various meanings of the word cause, the commonest referring to efficient cause, the topic of the present article.
  • Material cause, the material whence a thing has come or that which persists while it changes, as for example, one's mother or the bronze of a statue (see also substance theory).[71]
  • Formal cause, whereby a thing's dynamic form or static shape determines the thing's properties and function, as a human differs from a statue of a human or as a statue differs from a lump of bronze.[72]
  • Efficient cause, which imparts the first relevant movement, as a human lifts a rock or raises a statue. This is the main topic of the present article.
  • Final cause, the criterion of completion, or the end; it may refer to an action or to an inanimate process. Examples: Socrates takes a walk after dinner for the sake of his health; earth falls to the lowest level because that is its nature.
Of Aristotle's four kinds or explanatory modes, only one, the 'efficient cause' is a cause as defined in the leading paragraph of this present article. The other three explanatory modes might be rendered material composition, structure and dynamics, and, again, criterion of completion. The word that Aristotle used was αἰτία. For the present purpose, that Greek word would be better translated as "explanation" than as "cause" as those words are most often used in current English. Another translation of Aristotle is that he meant "the four Becauses" as four kinds of answer to "why" questions.[18]

Aristotle assumed efficient causality as referring to a basic fact of experience, not explicable by, or reducible to, anything more fundamental or basic.

In some works of Aristotle, the four causes are listed as (1) the essential cause, (2) the logical ground, (3) the moving cause, and (4) the final cause. In this listing, a statement of essential cause is a demonstration that an indicated object conforms to a definition of the word that refers to it. A statement of logical ground is an argument as to why an object statement is true. These are further examples of the idea that a "cause" in general in the context of Aristotle's usage is an "explanation".[18]

The word "efficient" used here can also be translated from Aristotle as "moving" or "initiating".[18]

Efficient causation was connected with Aristotelian physics, which recognized the four elements (earth, air, fire, water) and added a fifth element (aether). Water and earth, by their intrinsic property gravitas or heaviness, intrinsically fall toward Earth's center—the motionless center of the universe—whereas air and fire, by their intrinsic property levitas or lightness, intrinsically rise away from it, each moving in a straight line and accelerating as the substance approaches its natural place.

As air remained on Earth, however, and did not escape Earth while eventually achieving infinite speed—an absurdity—Aristotle inferred that the universe is finite in size and contains an invisible substance that held planet Earth and its atmosphere, the sublunary sphere, centered in the universe. And since celestial bodies exhibit perpetual, unaccelerated motion orbiting planet Earth in unchanging relations, Aristotle inferred that the fifth element, aither, that fills space and composes celestial bodies intrinsically moves in perpetual circles, the only constant motion between two points. (An object traveling a straight line from point A to B and back must stop at either point before returning to the other.)

Left to itself, a thing exhibits natural motion, but can—according to Aristotelian metaphysics—exhibit enforced motion imparted by an efficient cause. The form of plants endows plants with the processes nutrition and reproduction, the form of animals adds locomotion, and the form of humankind adds reason atop these. A rock normally exhibits natural motion—explained by the rock's material cause of being composed of the element earth—but a living thing can lift the rock, an enforced motion diverting the rock from its natural place and natural motion. As a further kind of explanation, Aristotle identified the final cause, specifying a purpose or criterion of completion in light of which something should be understood.

Aristotle himself explained,
Cause means
(a) in one sense, that as the result of whose presence something comes into being—e.g., the bronze of a statue and the silver of a cup, and the classes which contain these [i.e., the material cause];
(b) in another sense, the form or pattern; that is, the essential formula and the classes which contain it—e.g. the ratio 2:1 and number in general is the cause of the octave—and the parts of the formula [i.e., the formal cause].
(c) The source of the first beginning of change or rest; e.g. the man who plans is a cause, and the father is the cause of the child, and in general that which produces is the cause of that which is produced, and that which changes of that which is changed [i.e., the efficient cause].
(d) The same as "end"; i.e. the final cause; e.g., as the "end" of walking is health. For why does a man walk? "To be healthy", we say, and by saying this we consider that we have supplied the cause [the final cause].
(e) All those means towards the end which arise at the instigation of something else, as, e.g., fat-reducing, purging, drugs and instruments are causes of health; for they all have the end as their object, although they differ from each other as being some instruments, others actions [i.e., necessary conditions].
— Metaphysics, Book 5, section 1013a, translated by Hugh Tredennick[73]
Aristotle further discerned two modes of causation: proper (prior) causation and accidental (chance) causation. All causes, proper and accidental, can be spoken of as potential or as actual, particular or generic. The same language refers to the effects of causes, so that generic effects are assigned to generic causes, particular effects to particular causes, and actual effects to operating causes.

Averting infinite regress, Aristotle inferred the first mover—an unmoved mover. The first mover's motion, too, must have been caused, but, being an unmoved mover, must have moved only toward a particular goal or desire.

Middle Ages

In line with Aristotelian cosmology, Thomas Aquinas posed a hierarchy prioritizing Aristotle's four causes: "final > efficient > material > formal".[74] Aquinas sought to identify the first efficient cause—now simply first cause—as everyone would agree, said Aquinas, to call it God. Later in the Middle Ages, many scholars conceded that the first cause was God, but explained that many earthly events occur within God's design or plan, and thereby scholars sought freedom to investigate the numerous secondary causes.

After the Middle Ages

For Aristotelian philosophy before Aquinas, the word cause had a broad meaning. It meant 'answer to a why question' or 'explanation', and Aristotelian scholars recognized four kinds of such answers. With the end of the Middle Ages, in many philosophical usages, the meaning of the word 'cause' narrowed. It often lost that broad meaning, and was restricted to just one of the four kinds. For authors such as Niccolò Machiavelli, in the field of political thinking, and Francis Bacon, concerning science more generally, Aristotle's moving cause was the focus of their interest. A widely used modern definition of causality in this newly narrowed sense was assumed by David Hume.[74] He undertook an epistemological and metaphysical investigation of the notion of moving cause. He denied that we can ever perceive cause and effect, except by developing a habit or custom of mind where we come to associate two types of object or event, always contiguous and occurring one after the other.[75] In Part III, section XV of his book A Treatise of Human Nature, Hume expanded this to a list of eight ways of judging whether two things might be cause and effect. The first three:
1. "The cause and effect must be contiguous in space and time."
2. "The cause must be prior to the effect."
3. "There must be a constant union betwixt the cause and effect. 'Tis chiefly this quality, that constitutes the relation."
And then additionally there are three connected criteria which come from our experience and which are "the source of most of our philosophical reasonings":
4. "The same cause always produces the same effect, and the same effect never arises but from the same cause. This principle we derive from experience, and is the source of most of our philosophical reasonings."
5. Hanging upon the above, Hume says that "where several different objects produce the same effect, it must be by means of some quality, which we discover to be common amongst them."
6. And "founded on the same reason": "The difference in the effects of two resembling objects must proceed from that particular, in which they differ."
And then two more:
7. "When any object increases or diminishes with the increase or diminution of its cause, 'tis to be regarded as a compounded effect, deriv'd from the union of the several different effects, which arise from the several different parts of the cause."
8. An "object, which exists for any time in its full perfection without any effect, is not the sole cause of that effect, but requires to be assisted by some other principle, which may forward its influence and operation."
In 1949, physicist Max Born distinguished determination from causality. For him, determination meant that actual events are so linked by laws of nature that certainly reliable predictions and retrodictions can be made from sufficient present data about them. For him, there are two kinds of causation, which we may here call nomic or generic causation, and singular causation. Nomic causality means that cause and effect are linked by more or less certain or probabilistic general laws covering many possible or potential instances; we may recognize this as a probabilized version of criterion 3. of Hume mentioned just above. An occasion of singular causation is a particular occurrence of a definite complex of events that are physically linked by antecedence and contiguity, which we may here recognize as criteria 1. and 2. of Hume mentioned just above.[9]

Reproductive rights

From Wikipedia, the free encyclopedia