
Thursday, August 2, 2018

Tabula rasa

From Wikipedia, the free encyclopedia
Roman tabula or wax tablet with stylus.

Tabula rasa (/ˈtæbjələ ˈrɑːsə, -zə, ˈreɪ-/) refers to the epistemological idea that individuals are born without built-in mental content and that therefore all knowledge comes from experience or perception. Proponents of tabula rasa generally disagree with the doctrine of innatism, which holds that the mind is born already in possession of certain knowledge. Generally, proponents of the tabula rasa theory also favour the "nurture" side of the nature versus nurture debate when it comes to aspects of one's personality, social and emotional behaviour, knowledge, and sapience.

History

Tabula rasa is a Latin phrase often translated as "blank slate" in English and originates from the Roman tabula used for notes, which was blanked by heating the wax and then smoothing it.[1] This roughly equates to the English term "blank slate" (or, more literally, "erased slate"), which refers to the emptiness of a slate before it has been written on with chalk. Both may be renewed repeatedly, by melting the wax of the tablet or by erasing the chalk on the slate.

Philosophy

In Western philosophy, the concept of tabula rasa can be traced back to the writings of Aristotle who writes in his treatise "Περί Ψυχῆς" (De Anima or On the Soul) of the "unscribed tablet." In one of the more well-known passages of this treatise he writes that:
Haven't we already disposed of the difficulty about interaction involving a common element, when we said that mind is in a sense potentially whatever is thinkable, though actually it is nothing until it has thought? What it thinks must be in it just as characters may be said to be on a writing-tablet on which as yet nothing stands written: this is exactly what happens with mind.[2]
This idea was further developed in Ancient Greek philosophy by the Stoic school. Stoic epistemology emphasizes that the mind starts blank, but acquires knowledge as the outside world is impressed upon it.[3] The doxographer Aetius summarizes this view as "When a man is born, the Stoics say, he has the commanding part of his soul like a sheet of paper ready for writing upon."[4] Diogenes Laërtius attributes a similar belief to the Stoic Zeno of Citium when he writes in Lives and Opinions of Eminent Philosophers that:
Perception, again, is an impression produced on the mind, its name being appropriately borrowed from impressions on wax made by a seal; and perception they divide into, comprehensible and incomprehensible: Comprehensible, which they call the criterion of facts, and which is produced by a real object, and is, therefore, at the same time conformable to that object; Incomprehensible, which has no relation to any real object, or else, if it has any such relation, does not correspond to it, being but a vague and indistinct representation.[5]
In the eleventh century, the theory of tabula rasa was developed more clearly by the Persian philosopher Avicenna (Ibn Sina in Arabic). He argued that the "...human intellect at birth resembled a tabula rasa, a pure potentiality that is actualized through education and comes to know," and that knowledge is attained through "...empirical familiarity with objects in this world from which one abstracts universal concepts," which develops through a "...syllogistic method of reasoning; observations lead to propositional statements, which when compounded lead to further abstract concepts." He further argued that the intellect itself "...possesses levels of development from the static/material intellect (al-‘aql al-hayulani), that potentiality can acquire knowledge to the active intellect (al-‘aql al-fa‘il), the state of the human intellect at conjunction with the perfect source of knowledge."[6]

In the twelfth century, the Andalusian-Islamic philosopher and novelist, Ibn Tufail, known as "Abubacer" or "Ebn Tophail" in the West, demonstrated the theory of tabula rasa as a thought experiment through his Arabic philosophical novel, Hayy ibn Yaqzan, in which he depicted the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. The Latin translation of his philosophical novel, entitled Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in An Essay Concerning Human Understanding.[7]
 

In the thirteenth century, St. Thomas Aquinas brought the Aristotelian and Avicennian notions to the forefront of Christian thought. These notions sharply contrasted with the previously held Platonic notions of the human mind as an entity that preexisted somewhere in the heavens, before being sent down to join a body here on Earth (see Plato's Phaedo and Apology, as well as others). St. Bonaventure (also thirteenth century) was one of the fiercest intellectual opponents of Aquinas, offering some of the strongest arguments in favour of the Platonic idea of the mind.

The writings of Avicenna, Ibn Tufail, and Aquinas on the tabula rasa theory remained largely undeveloped and untested for several centuries.[citation needed] For example, the late medieval English jurist Sir John Fortescue, in his work In Praise of the Laws of England (Chapter VI), takes the notion of tabula rasa for granted, stressing it as the basis of the need for the education of the young in general, and of young princes in particular. "Therefore, Prince, whilst you are young and your mind is as it were a clean slate, impress on it these things, lest in future it be impressed more pleasurably with images of lesser worth." (His igitur, Princeps, dum Adolescens es, et Anima tua velut Tabula rasa, depinge eam, ne in futurum ipsa Figuris minoris Frugi delectabilius depingatur.)

The modern idea of the theory, however, is attributed mostly to John Locke's expression of the idea in Essay Concerning Human Understanding (he uses the term "white paper" in Book II, Chap. I, 2). In Locke's philosophy, tabula rasa was the theory that at birth the (human) mind is a "blank slate" without rules for processing data, and that data is added and rules for processing are formed solely by one's sensory experiences. The notion is central to Lockean empiricism. As understood by Locke, tabula rasa meant that the mind of the individual was born blank, and it also emphasized the freedom of individuals to author their own soul. Individuals are free to define the content of their character—but basic identity as a member of the human species cannot be altered. This presumption of a free, self-authored mind combined with an immutable human nature leads to the Lockean doctrine of "natural" rights. Locke's idea of tabula rasa is frequently compared with Thomas Hobbes's viewpoint of human nature, in which humans are endowed with inherent mental content—particularly with selfishness.[citation needed]

The eighteenth-century Swiss philosopher Jean-Jacques Rousseau used tabula rasa to support his argument that warfare is a product of society and agriculture rather than something arising from the human state of nature. Since tabula rasa holds that humans are born with a "blank slate", Rousseau used this to suggest that humans must learn warfare.

Tabula rasa also features in Sigmund Freud's psychoanalysis. Freud depicted personality traits as being formed by family dynamics (see Oedipus complex). Freud's theories imply that humans lack free will, but also that genetic influences on human personality are minimal. In Freudian psychoanalysis, one is largely determined by one's upbringing.[citation needed]

The tabula rasa concept became popular in social sciences during the twentieth century. Early ideas of eugenics posited that human intelligence correlated strongly with social class, but these ideas were rejected, and the idea that genes (or simply "blood") determined a person's character became regarded as racist. By the 1970s, scientists such as John Money had come to see gender identity as socially constructed, rather than rooted in genetics.

Science

Psychology and neurobiology

Psychologists and neurobiologists have shown evidence that, initially, the entire cerebral cortex is programmed and organized to process sensory input, control motor actions, regulate emotion, and respond reflexively (under predetermined conditions).[8] These programmed mechanisms in the brain subsequently act to learn and refine the abilities of the organism.[9][10] For example, psychologist Steven Pinker showed that—in contrast to written language—the brain is "programmed" to pick up spoken language spontaneously.[11]
There have been claims by a minority in psychology and neurobiology, however, that the brain is tabula rasa only for certain behaviours. For instance, with respect to one's ability to acquire both general and special types of knowledge or skills, Howe argued against the existence of innate talent.[12] There also have been neurological investigations into specific learning and memory functions, such as Karl Lashley's study on mass action and serial interaction mechanisms.

Important evidence against the tabula rasa model of the mind comes from behavioural genetics, especially twin and adoption studies (see below). These indicate strong genetic influences on personal characteristics such as IQ, alcoholism, gender identity, and other traits.[11] Critically, multivariate studies show that the distinct faculties of the mind, such as memory and reason, fractionate along genetic boundaries. Cultural universals such as emotion and the relative resilience of psychological adaptation to accidental biological changes (for instance the David Reimer case of gender reassignment following an accident) also support basic biological mechanisms in the mind.[13]

Social pre-wiring

Twin studies have produced important evidence against the tabula rasa model of the mind, specifically with respect to social behaviour.

The social pre-wiring hypothesis concerns the ontogeny of social interaction; it is also informally referred to as being "wired to be social." The theory asks whether a propensity for socially oriented action is already present before birth. Research on the theory concludes that newborns come into the world with a unique genetic wiring to be social.[14]

Circumstantial evidence supporting the social pre-wiring hypothesis can be found by examining newborns' behaviour. Newborns, not even hours after birth, have been found to display a preparedness for social interaction, expressed in ways such as their imitation of facial gestures. This observed behaviour cannot be attributed to any current form of socialization or social construction. Rather, newborns most likely inherit social behaviour and identity, to some extent, through genetics.[14]

Principal evidence for this theory comes from examining twin pregnancies. The main argument is that if there are social behaviours that are inherited and developed before birth, then one should expect twin fetuses to engage in some form of social interaction before they are born. Accordingly, ten fetuses were analyzed over a period of time using ultrasound techniques. Kinematic analysis showed that the twin fetuses interacted with each other for longer periods and more often as the pregnancies went on. The researchers concluded that the movements between the co-twins were not accidental but specifically aimed.[14]

The researchers took this as confirmation of the social pre-wiring hypothesis: "The central advance of this study is the demonstration that 'social actions' are already performed in the second trimester of gestation. Starting from the 14th week of gestation twin fetuses plan and execute movements specifically aimed at the co-twin. These findings force us to predate the emergence of social behaviour: when the context enables it, as in the case of twin fetuses, other-directed actions are not only possible but predominant over self-directed actions."[14]

Computer science

In computer science, tabula rasa refers to the development of autonomous agents with a mechanism to reason and plan toward their goal, but no "built-in" knowledge-base of their environment. Thus they truly are a blank slate.

In reality, autonomous agents possess an initial data-set or knowledge-base, but this cannot be immutable or it would hamper autonomy and heuristic ability.[citation needed] Even if the data-set is empty, it can usually be argued that there is a built-in bias in the reasoning and planning mechanisms.[citation needed] Whether placed there intentionally or unintentionally by the human designer, such bias negates the true spirit of tabula rasa.[15]
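As a loose illustration of that point, the sketch below (hypothetical code, not drawn from any cited system) shows an agent whose knowledge base starts empty and is filled only by observation; the fixed frequency-counting update rule and the "prefer the most frequent percept" policy are exactly the kind of designer-supplied bias described above.

    # Hypothetical sketch of a "tabula rasa" agent. The knowledge base starts
    # empty and is populated only from observations; the update and action
    # rules below are biases chosen by the designer, not learned.
    from collections import Counter

    class TabulaRasaAgent:
        def __init__(self):
            self.knowledge = Counter()       # empty knowledge base at "birth"

        def observe(self, percept):
            self.knowledge[percept] += 1     # all content comes from experience

        def act(self):
            if not self.knowledge:
                return None                  # nothing learned yet, so no basis to act
            # Designer-supplied planning bias: prefer the most frequently seen percept.
            return self.knowledge.most_common(1)[0][0]

    agent = TabulaRasaAgent()
    for percept in ["red", "green", "red"]:
        agent.observe(percept)
    print(agent.act())                       # -> red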

A synthetic (programming) language parser (LR(1), LALR(1), or SLR(1), for example) could be considered a special case of tabula rasa: it is designed to accept any of a possibly infinite set of source programs written in a single programming language, and to output either a good parse of the program or a good machine-language translation of it, either of which represents success, or else a failure, and nothing else. The "initial data-set" is a set of tables, generally produced mechanically by a parser table generator, usually from a BNF representation of the source language, and constituting a "table representation" of that single programming language.
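As a minimal sketch of what such an "initial data-set" looks like (a hypothetical toy example, not taken from any particular parser generator), the following table-driven shift-reduce parser handles the grammar E -> E "+" n | n; its hand-written ACTION and GOTO tables stand in for the tables a generator would derive mechanically from a BNF grammar.

    # Toy table-driven (SLR(1)-style) parser for the grammar:
    #   rule 0:  E -> E "+" "n"
    #   rule 1:  E -> "n"
    # The ACTION/GOTO tables are the parser's entire "initial data-set".
    ACTION = {
        0: {"n": ("shift", 2)},
        1: {"+": ("shift", 3), "$": ("accept",)},
        2: {"+": ("reduce", 1), "$": ("reduce", 1)},
        3: {"n": ("shift", 4)},
        4: {"+": ("reduce", 0), "$": ("reduce", 0)},
    }
    GOTO = {0: {"E": 1}}
    RULES = [("E", 3), ("E", 1)]   # (left-hand side, length of right-hand side)

    def parse(tokens):
        stack, pos = [0], 0
        while True:
            state, tok = stack[-1], tokens[pos]
            act = ACTION[state].get(tok)
            if act is None:
                return False                       # syntax error
            if act[0] == "accept":
                return True
            if act[0] == "shift":
                stack.append(act[1])
                pos += 1
            else:                                  # reduce by the given rule
                lhs, length = RULES[act[1]]
                del stack[len(stack) - length:]
                stack.append(GOTO[stack[-1]][lhs])

    print(parse(["n", "+", "n", "$"]))   # True  (a good parse exists)
    print(parse(["n", "+", "+", "$"]))   # False (rejected, and nothing else)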

Radical new vertically integrated 3D chip design combines computing and data storage

Aims to process and store massive amounts of data at ultra-high speed in the future.
July 7, 2017
Original link:  http://www.kurzweilai.net/radical-new-vertically-integrated-3d-chip-design-combines-computing-and-data-storage
Four vertical layers in new 3D nanosystem chip. Top (fourth layer): sensors and more than one million carbon-nanotube field-effect transistor (CNFET) logic inverters; third layer, on-chip non-volatile RRAM (1 Mbit memory); second layer, CNFET logic with classification accelerator (to identify sensor inputs); first (bottom) layer, silicon FET logic. (credit: Max M. Shulaker et al./Nature)

A radical new 3D chip that combines computation and data storage in vertically stacked layers — allowing for processing and storing massive amounts of data at high speed in future transformative nanosystems — has been designed by researchers at Stanford University and MIT.

The new 3D-chip design* replaces silicon with carbon nanotubes (sheets of 2-D graphene formed into nanocylinders) and integrates resistive random-access memory (RRAM) cells.

Carbon-nanotube field-effect transistors (CNFETs) are an emerging transistor technology that can scale beyond the limits of silicon MOSFETs (conventional chips), and promise an order-of-magnitude improvement in energy-efficient computation. However, experimental demonstrations of CNFETs so far have been small-scale and limited to integrating only tens or hundreds of devices (see earlier 2015 Stanford research, “Skyscraper-style carbon-nanotube chip design…”).

The researchers integrated more than 1 million RRAM cells and 2 million carbon-nanotube field-effect transistors in the chip, making it the most complex nanoelectronic system ever made with emerging nanotechnologies, according to the researchers. RRAM is an emerging memory technology that promises high-capacity, non-volatile data storage, with improved speed, energy efficiency, and density, compared to dynamic random-access memory (DRAM).

Instead of requiring separate components, the RRAM cells and carbon nanotubes are built vertically over one another, creating a dense new 3D computer architecture** with interleaving layers of logic and memory. Ultradense through-chip vias (electrical interconnects passing between layers) eliminate the high delay of conventional wiring between separate computer components.
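A back-of-the-envelope model (illustrative numbers only, not measurements from the paper) shows why per-sensor vertical connections matter: readings that would queue up behind a single shared off-chip link can instead move in parallel.

    # Rough, hypothetical model of moving one reading from every sensor into memory.
    samples = 1_000_000        # roughly the prototype's sensor count (over one million)
    t_link  = 10e-9            # assumed time to move one sample over any single link, in seconds

    t_shared_bus    = samples * t_link   # readings take turns on one shared off-chip link
    t_parallel_vias = t_link             # each sensor writes to its own memory cell at once

    print(f"shared off-chip link: {t_shared_bus * 1e3:.0f} ms")    # 10 ms
    print(f"per-sensor vias:      {t_parallel_vias * 1e9:.0f} ns")  # 10 ns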

The new 3D nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce “highly processed” information. “Such complex nanoelectronic systems will be essential for future high-performance, highly energy-efficient electronic systems,” the researchers say.

How to combine computation and storage

Illustration of separate CPU (bottom) and RAM memory (top) in current computer architecture (images credit: iStock)

The new chip design aims to replace current chip designs, which separate computing and data storage, resulting in limited-speed connections.

Separate 2D chips have been required because “building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” explains Max Shulaker, an assistant professor of electrical engineering and computer science at MIT and lead author of a paper published July 5, 2017, in the journal Nature. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

Instead, carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures: below 200 °C. “This means they can be built up in layers without harming the circuits beneath,” says Shulaker.

Overcoming communication and computing bottlenecks

As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on increasingly miniaturized chips, there is not enough room to place chips side-by-side.

At the same time, embedded intelligence in areas ranging from autonomous driving to personalized medicine is now generating huge amounts of data, but silicon transistors are no longer improving at the historic rate that they have for decades.

Instead, three-dimensional integration is the most promising approach to continue the technology-scaling path set forth by Moore’s law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

Three-dimensional integration “leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” he says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

The new 3D design provides several benefits for future computing systems, including:
  • Logic circuits made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon.
  • RRAM memory is denser, faster, and more energy-efficient compared to conventional DRAM (dynamic random-access memory) devices.
  • The dense through-chip vias (wires) can enable vertical connectivity that is 1,000 times more dense than conventional packaging and chip-stacking solutions allow, which greatly improves the data communication bandwidth between vertically stacked functional layers. For example, each sensor in the top layer can connect directly to its respective underlying memory cell with an inter-layer via. This enables the sensors to write their data in parallel directly into memory and at high speed.
  • The design is compatible in both fabrication and design with today’s CMOS silicon infrastructure.
Shulaker next plans to work with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system.

This work was funded by the Defense Advanced Research Projects Agency, the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

* As a working-prototype demonstration of the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip, they placed more than 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases for detecting signs of disease by sensing particular compounds in a patient’s breath, says Shulaker. By layering sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth in just one device, according to Shulaker. The top layer could be replaced with additional computation or data storage subsystems, or with other forms of input/output, he explains.

** Previous R&D in 3D chip technologies and their limitations are covered here, noting that “in general, 3D integration is a broad term that includes such technologies as 3D wafer-level packaging (3DWLP); 2.5D and 3D interposer-based integration; 3D stacked ICs (3D-SICs), monolithic 3D ICs; 3D heterogeneous integration; and 3D systems integration.” The new Stanford-MIT nanosystem design significantly expands this definition.



Abstract of Three-dimensional integration of nanotechnologies for computing and data storage on a single chip

The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors—promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage—fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce ‘highly processed’ information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.

Rationalism

From Wikipedia, the free encyclopedia

Baruch Spinoza, 17th-century Dutch philosopher and forerunner to the Age of Enlightenment, is regarded as one of the most influential rationalists of all time.[1][2]

In philosophy, rationalism is the epistemological view that "regards reason as the chief source and test of knowledge" or "any view appealing to reason as a source of knowledge or justification". More formally, rationalism is defined as a methodology or a theory "in which the criterion of the truth is not sensory but intellectual and deductive".

In an old controversy, rationalism was opposed to empiricism, where the rationalists believed that reality has an intrinsically logical structure. Because of this, the rationalists argued that certain truths exist and that the intellect can directly grasp these truths. That is to say, rationalists asserted that certain rational principles exist in logic, mathematics, ethics, and metaphysics that are so fundamentally true that denying them causes one to fall into contradiction. The rationalists had such a high confidence in reason that empirical proof and physical evidence were regarded as unnecessary to ascertain certain truths – in other words, "there are significant ways in which our concepts and knowledge are gained independently of sense experience".[6]

Different degrees of emphasis on this method or theory lead to a range of rationalist standpoints, from the moderate position "that reason has precedence over other ways of acquiring knowledge" to the more extreme position that reason is "the unique path to knowledge".[7] Given a pre-modern understanding of reason, rationalism is identical to philosophy, the Socratic life of inquiry, or the zetetic (skeptical) clear interpretation of authority (open to the underlying or essential cause of things as they appear to our sense of certainty). In recent decades, Leo Strauss sought to revive "Classical Political Rationalism" as a discipline that understands the task of reasoning, not as foundational, but as maieutic.

In politics, rationalism, since the Enlightenment, historically emphasized a "politics of reason" centered upon rational choice, utilitarianism, secularism, and irreligion[8] – the latter aspect's antitheism later softened by politic adoption of pluralistic rationalist methods practicable regardless of religious or irreligious ideology.[9]

In this regard, the philosopher John Cottingham[10] noted how rationalism, a methodology, became socially conflated with atheism, a worldview:
In the past, particularly in the 17th and 18th centuries, the term 'rationalist' was often used to refer to free thinkers of an anti-clerical and anti-religious outlook, and for a time the word acquired a distinctly pejorative force (thus in 1670 Sanderson spoke disparagingly of 'a mere rationalist, that is to say in plain English an atheist of the late edition...'). The use of the label 'rationalist' to characterize a world outlook which has no place for the supernatural is becoming less popular today; terms like 'humanist' or 'materialist' seem largely to have taken its place. But the old usage still survives.

Philosophical usage

Rationalism is often contrasted with empiricism. Taken very broadly, these views are not mutually exclusive, since a philosopher can be both rationalist and empiricist.[4] Taken to extremes, the empiricist view holds that all ideas come to us a posteriori, that is to say, through experience; either through the external senses or through such inner sensations as pain and gratification. The empiricist essentially believes that knowledge is based on or derived directly from experience. The rationalist believes we come to knowledge a priori – through the use of logic – and that it is thus independent of sensory experience. In other words, as Galen Strawson once wrote, "you can see that it is true just lying on your couch. You don't have to get up off your couch and go outside and examine the way things are in the physical world. You don't have to do any science."[11] The issue between the two philosophies is the fundamental source of human knowledge and the proper techniques for verifying what we think we know. Whereas both philosophies fall under the umbrella of epistemology, their argument lies in the understanding of warrant, which falls under the wider epistemic umbrella of the theory of justification.

Theory of justification

The theory of justification is the part of epistemology that attempts to understand the justification of propositions and beliefs. Epistemologists are concerned with various epistemic features of belief, which include the ideas of justification, warrant, rationality, and probability. Of these four terms, the term that has been most widely used and discussed by the early 21st century is "warrant". Loosely speaking, justification is the reason that someone (properly) holds a belief.
If "A" makes a claim, and "B" then casts doubt on it, "A"'s next move would normally be to provide justification. The precise method one uses to provide justification is where the lines are drawn between rationalism and empiricism (among other philosophical views). Much of the debate in these fields is focused on analyzing the nature of knowledge and how it relates to connected notions such as truth, belief, and justification.

Thesis of rationalism

At its core, rationalism consists of three basic claims. To be considered a rationalist, one must adopt at least one of these three claims: the Intuition/Deduction thesis, the Innate Knowledge thesis, or the Innate Concept thesis. In addition, rationalists can choose to adopt the claims of the Indispensability of Reason and/or the Superiority of Reason – although one can be a rationalist without adopting either thesis.

The Intuition/Deduction Thesis

Rationale: "Some propositions in a particular subject area, S, are knowable by us by intuition alone; still others are knowable by being deduced from intuited propositions."[12]
Generally speaking, intuition is a priori knowledge or experiential belief characterized by its immediacy; a form of rational insight. We simply "see" something in such a way as to give us a warranted belief. Beyond that, the nature of intuition is hotly debated.

In the same way, generally speaking, deduction is the process of reasoning from one or more general premises to reach a logically certain conclusion. Using valid arguments, we can deduce from intuited premises.

For example, when we combine both concepts, we can intuit that the number three is prime and that it is greater than two. We then deduce from this knowledge that there is a prime number greater than two. Thus, it can be said that intuition and deduction combined to provide us with a priori knowledge – we gained this knowledge independently of sense experience.
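Spelled out, the deductive step is a single application of existential introduction to the two intuited facts:

    \[
    \frac{\mathrm{Prime}(3) \;\wedge\; 3 > 2}{\exists\, n\,\bigl(\mathrm{Prime}(n) \wedge n > 2\bigr)}
    \quad\text{(existential introduction)}
    \]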

Empiricists such as David Hume have been willing to accept this thesis for describing the relationships among our own concepts.[12] In this sense, empiricists argue that we are allowed to intuit and deduce truths from knowledge that has been obtained a posteriori.

By injecting different subjects into the Intuition/Deduction thesis, we are able to generate different arguments. Most rationalists agree that mathematics is knowable by applying intuition and deduction. Some go further and include ethical truths in the category of things knowable by intuition and deduction. Furthermore, some rationalists also claim that metaphysics is knowable under this thesis.

In addition to different subjects, rationalists sometimes vary the strength of their claims by adjusting their understanding of the warrant. Some rationalists understand warranted beliefs to be beyond even the slightest doubt; others are more conservative and understand the warrant to be belief beyond a reasonable doubt.

Rationalists also have different understanding and claims involving the connection between intuition and truth. Some rationalists claim that intuition is infallible and that anything we intuit to be true is as such. More contemporary rationalists accept that intuition is not always a source of certain knowledge – thus allowing for the possibility of a deceiver who might cause the rationalist to intuit a false proposition in the same way a third party could cause the rationalist to have perceptions of nonexistent objects.

Naturally, the more subjects the rationalists claim to be knowable by the Intuition/Deduction thesis, the more certain they are of their warranted beliefs, and the more strictly they adhere to the infallibility of intuition, the more controversial their truths or claims and the more radical their rationalism.[12]

To argue in favor of this thesis, Gottfried Wilhelm Leibniz, a prominent German philosopher, says, "The senses, although they are necessary for all our actual knowledge, are not sufficient to give us the whole of it, since the senses never give anything but instances, that is to say particular or individual truths. Now all the instances which confirm a general truth, however numerous they may be, are not sufficient to establish the universal necessity of this same truth, for it does not follow that what happened before will happen in the same way again. … From which it appears that necessary truths, such as we find in pure mathematics, and particularly in arithmetic and geometry, must have principles whose proof does not depend on instances, nor consequently on the testimony of the senses, although without the senses it would never have occurred to us to think of them…"[13]

The Innate Knowledge Thesis

Rationale: "We have knowledge of some truths in a particular subject area, S, as part of our rational nature."[14]

The Innate Knowledge thesis is similar to the Intuition/Deduction thesis in that both theses claim knowledge is gained a priori. The two theses go their separate ways when describing how that knowledge is gained. As the name, and the rationale, suggest, the Innate Knowledge thesis claims knowledge is simply part of our rational nature. Experiences can trigger a process that allows this knowledge to come into our consciousness, but the experiences don't provide us with the knowledge itself. The knowledge has been with us since the beginning, and experience simply brings it into focus, in the same way a photographer can bring the background of a picture into focus by changing the aperture of the lens. The background was always there, just not in focus.

This thesis targets a problem with the nature of inquiry originally postulated by Plato in Meno. Here, Plato asks about inquiry; how do we gain knowledge of a theorem in geometry? We inquire into the matter. Yet, knowledge by inquiry seems impossible.[15] In other words, "If we already have the knowledge, there is no place for inquiry. If we lack the knowledge, we don't know what we are seeking and cannot recognize it when we find it. Either way we cannot gain knowledge of the theorem by inquiry. Yet, we do know some theorems."[14] The Innate Knowledge thesis offers a solution to this paradox. By claiming that knowledge is already with us, either consciously or unconsciously, a rationalist claims we don't really "learn" things in the traditional usage of the word, but rather that we simply bring to light what we already know.

The Innate Concept Thesis

Rationale: "We have some of the concepts we employ in a particular subject area, S, as part of our rational nature."[16]

Similar to the Innate Knowledge thesis, the Innate Concept thesis suggests that some concepts are simply part of our rational nature. These concepts are a priori in nature and sense experience is irrelevant to determining the nature of these concepts (though, sense experience can help bring the concepts to our conscious mind).

Some philosophers, such as John Locke (who is considered one of the most influential thinkers of the Enlightenment and an empiricist), argue that the Innate Knowledge thesis and the Innate Concept thesis are the same.[17] Other philosophers, such as Peter Carruthers, argue that the two theses are distinct from one another. As with the other theses covered under the umbrella of rationalism, the more types and greater number of concepts a philosopher claims to be innate, the more controversial and radical their position; "the more a concept seems removed from experience and the mental operations we can perform on experience the more plausibly it may be claimed to be innate. Since we do not experience perfect triangles but do experience pains, our concept of the former is a more promising candidate for being innate than our concept of the latter."[16]

In his book, Meditations on First Philosophy,[18] René Descartes postulates three classifications for our ideas when he says, "Among my ideas, some appear to be innate, some to be adventitious, and others to have been invented by me. My understanding of what a thing is, what truth is, and what thought is, seems to derive simply from my own nature. But my hearing a noise, as I do now, or seeing the sun, or feeling the fire, comes from things which are located outside me, or so I have hitherto judged. Lastly, sirens, hippogriffs and the like are my own invention."[19]

Adventitious ideas are those concepts that we gain through sense experience, such as the sensation of heat; they originate from outside sources, transmitting their own likeness rather than something else, and they cannot simply be willed away. Ideas invented by us, such as those found in mythology, legends, and fairy tales, are created by us from other ideas we possess. Lastly, innate ideas, such as our ideas of perfection, are those ideas we have as a result of mental processes that are beyond what experience can directly or indirectly provide.

Gottfried Wilhelm Leibniz defends the idea of innate concepts by suggesting the mind plays a role in determining the nature of concepts. To explain this, he likens the mind to a block of marble in the New Essays on Human Understanding: "This is why I have taken as an illustration a block of veined marble, rather than a wholly uniform block or blank tablets, that is to say what is called tabula rasa in the language of the philosophers. For if the soul were like those blank tablets, truths would be in us in the same way as the figure of Hercules is in a block of marble, when the marble is completely indifferent whether it receives this or some other figure. But if there were veins in the stone which marked out the figure of Hercules rather than other figures, this stone would be more determined thereto, and Hercules would be as it were in some manner innate in it, although labour would be needed to uncover the veins, and to clear them by polishing, and by cutting away what prevents them from appearing. It is in this way that ideas and truths are innate in us, like natural inclinations and dispositions, natural habits or potentialities, and not like activities, although these potentialities are always accompanied by some activities which correspond to them, though they are often imperceptible."[20]

The other two theses

The three aforementioned theses of Intuition/Deduction, Innate Knowledge, and Innate Concept are the cornerstones of rationalism. To be considered a rationalist, one must adopt at least one of those three claims. The following two theses are traditionally adopted by rationalists, but they aren't essential to the rationalist's position.

The Indispensability of Reason Thesis has the following rationale: "The knowledge we gain in subject area, S, by intuition and deduction, as well as the ideas and instances of knowledge in S that are innate to us, could not have been gained by us through sense experience."[3] In short, this thesis claims that experience cannot provide what we gain from reason.

The Superiority of Reason Thesis has the following rationale: "The knowledge we gain in subject area S by intuition and deduction or have innately is superior to any knowledge gained by sense experience".[3] In other words, this thesis claims reason is superior to experience as a source of knowledge.

In addition to these claims, rationalists often adopt similar stances on other aspects of philosophy. Most rationalists reject skepticism for the areas of knowledge they claim are knowable a priori. Naturally, if one claims that some truths are innately known to us, one must reject skepticism in relation to those truths. Especially for rationalists who adopt the Intuition/Deduction thesis, the idea of epistemic foundationalism tends to crop up. This is the view that we know some truths without basing our belief in them on any others and that we then use this foundational knowledge to know more truths.[3]

Background

Rationalism - as an appeal to human reason as a way of obtaining knowledge - has a philosophical history dating from antiquity. The analytical nature of much of philosophical enquiry, the awareness of apparently a priori domains of knowledge such as mathematics, and the emphasis on obtaining knowledge through the use of rational faculties (commonly rejecting, for example, direct revelation) have made rationalist themes very prevalent in the history of philosophy.

Since the Enlightenment, rationalism is usually associated with the introduction of mathematical methods into philosophy as seen in the works of Descartes, Leibniz, and Spinoza.[5] This is commonly called continental rationalism, because it was predominant in the continental schools of Europe, whereas in Britain empiricism dominated.

Even then, the distinction between rationalists and empiricists was drawn at a later period and would not have been recognized by the philosophers involved. Also, the distinction between the two philosophies is not as clear-cut as is sometimes suggested; for example, Descartes and Locke have similar views about the nature of human ideas.[6]

Proponents of some varieties of rationalism argue that, starting with foundational basic principles, like the axioms of geometry, one could deductively derive the rest of all possible knowledge. The philosophers who held this view most clearly were Baruch Spinoza and Gottfried Leibniz, whose attempts to grapple with the epistemological and metaphysical problems raised by Descartes led to a development of the fundamental approach of rationalism. Both Spinoza and Leibniz asserted that, in principle, all knowledge, including scientific knowledge, could be gained through the use of reason alone, though they both observed that this was not possible in practice for human beings except in specific areas such as mathematics. On the other hand, Leibniz admitted in his book Monadology that "we are all mere Empirics in three fourths of our actions."[7]

History

Rationalist philosophy from antiquity

Although rationalism in its modern form post-dates antiquity, philosophers from this time laid down the foundations of rationalism.[citation needed] In particular, they articulated the understanding that we may be aware of knowledge available only through the use of rational thought.[citation needed]

Pythagoras (570–495 BCE)

Pythagoras was one of the first Western philosophers to stress rationalist insight.[21] He is often revered as a great mathematician, mystic and scientist, but he is best known for the Pythagorean theorem, which bears his name, and for discovering the mathematical relationship between the length of strings on a lute and the pitches of the notes. Pythagoras "believed these harmonies reflected the ultimate nature of reality. He summed up the implied metaphysical rationalism in the words 'All is number'. It is probable that he had caught the rationalist's vision, later seen by Galileo (1564–1642), of a world governed throughout by mathematically formulable laws".[21] It has been said that he was the first man to call himself a philosopher, or lover of wisdom.[22]

Plato (427–347 BCE)

Plato held rational insight to a very high standard, as is seen in his works such as Meno and The Republic. He taught on the Theory of Forms (or the Theory of Ideas)[23][24][25] which asserts that the highest and most fundamental kind of reality is not the material world of change known to us through sensation, but rather the abstract, non-material (but substantial) world of forms (or ideas).[26] For Plato, these forms were accessible only to reason and not to sense.[21] In fact, it is said that Plato admired reason, especially in geometry, so highly that he had the phrase "Let no one ignorant of geometry enter" inscribed over the door to his academy.[27]

Aristotle (384–322 BCE)

Aristotle's main contribution to rationalist thinking was syllogistic logic and its use in argument. Aristotle defines a syllogism as "a discourse in which certain (specific) things having been supposed, something different from the things supposed results of necessity because these things are so."[28] Despite this very general definition, in his work Prior Analytics Aristotle limits himself to categorical syllogisms, which consist of three categorical propositions.[29] These included categorical modal syllogisms.[30]
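A standard instance of the categorical form (the first-figure mood traditionally called "Barbara") makes the definition concrete:

    \[
    \frac{\text{All } M \text{ are } P \qquad \text{All } S \text{ are } M}{\text{All } S \text{ are } P}
    \]

For example: all mammals are animals; all dogs are mammals; therefore all dogs are animals.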

Post-Aristotle

Although the three great Greek philosophers disagreed with one another on specific points, they all agreed that rational thought could bring to light knowledge that was self-evident – information that humans otherwise couldn't know without the use of reason. After Aristotle's death, Western rationalistic thought was generally characterized by its application to theology, such as in the works of Augustine, the Islamic philosopher Avicenna, and the Jewish philosopher and theologian Maimonides. One notable event in the Western timeline was the philosophy of Thomas Aquinas, who attempted to merge Greek rationalism and Christian revelation in the thirteenth century.[21]

Classical rationalism

Early modern rationalism has its roots in the 17th-century Dutch Republic,[31] with some notable intellectual representatives like Hugo Grotius,[32] René Descartes, and Baruch Spinoza.

René Descartes (1596–1650)

Descartes was the first of the modern rationalists and has been dubbed the 'Father of Modern Philosophy.' Much subsequent Western philosophy is a response to his writings,[33][34][35] which are studied closely to this day.
Descartes thought that only knowledge of eternal truths – including the truths of mathematics, and the epistemological and metaphysical foundations of the sciences – could be attained by reason alone; other knowledge, the knowledge of physics, required experience of the world, aided by the scientific method. He also argued that although dreams appear as real as sense experience, these dreams cannot provide persons with knowledge. Also, since conscious sense experience can be the cause of illusions, sense experience itself can be doubted. As a result, Descartes deduced that a rational pursuit of truth should doubt every belief about sensory reality. He elaborated these beliefs in such works as Discourse on Method, Meditations on First Philosophy, and Principles of Philosophy. Descartes developed a method to attain truths according to which nothing that cannot be recognised by the intellect (or reason) can be classified as knowledge. These truths are gained "without any sensory experience," according to Descartes. Truths that are attained by reason are broken down into elements that intuition can grasp, which, through a purely deductive process, will result in clear truths about reality.

Descartes therefore argued, as a result of his method, that reason alone determined knowledge, and that this could be done independently of the senses. For instance, his famous dictum, cogito ergo sum or "I think, therefore I am", is a conclusion reached a priori i.e., prior to any kind of experience on the matter. The simple meaning is that doubting one's existence, in and of itself, proves that an "I" exists to do the thinking. In other words, doubting one's own doubting is absurd.[36] This was, for Descartes, an irrefutable principle upon which to ground all forms of other knowledge. Descartes posited a metaphysical dualism, distinguishing between the substances of the human body ("res extensa") and the mind or soul ("res cogitans"). This crucial distinction would be left unresolved and lead to what is known as the mind-body problem, since the two substances in the Cartesian system are independent of each other and irreducible.

Baruch Spinoza (1632–1677)

The philosophy of Baruch Spinoza is a systematic, logical, rational philosophy developed in seventeenth-century Europe.[37][38][39] Spinoza's philosophy is a system of ideas constructed upon basic building blocks with an internal consistency with which he tried to answer life's major questions and in which he proposed that "God exists only philosophically."[39][40] He was heavily influenced by Descartes,[41] Euclid[40] and Thomas Hobbes,[41] as well as theologians in the Jewish philosophical tradition such as Maimonides.[41] But his work was in many respects a departure from the Judeo-Christian tradition. Many of Spinoza's ideas continue to vex thinkers today and many of his principles, particularly regarding the emotions, have implications for modern approaches to psychology. To this day, many important thinkers have found Spinoza's "geometrical method"[39] difficult to comprehend: Goethe admitted that he found this concept confusing[citation needed]. His magnum opus, Ethics, contains unresolved obscurities and has a forbidding mathematical structure modeled on Euclid's geometry.[40] Spinoza's philosophy attracted believers such as Albert Einstein[42] and much intellectual attention.[43][44][45][46][47]

Gottfried Leibniz (1646–1716)

Leibniz was the last of the great Rationalists, contributing heavily to other fields such as metaphysics, epistemology, logic, mathematics, physics, jurisprudence, and the philosophy of religion; he is also considered to be one of the last "universal geniuses".[48] He did not develop his system, however, independently of these advances. Leibniz rejected Cartesian dualism and denied the existence of a material world. In Leibniz's view there are infinitely many simple substances, which he called "monads" (possibly taking the term from the work of Anne Conway).
Leibniz developed his theory of monads in response to both Descartes and Spinoza, because the rejection of their visions forced him to arrive at his own solution. Monads are the fundamental unit of reality, according to Leibniz, constituting both inanimate and animate objects. These units of reality represent the universe, though they are not subject to the laws of causality or space (which he called "well-founded phenomena"). Leibniz, therefore, introduced his principle of pre-established harmony to account for apparent causality in the world.

Immanuel Kant (1724–1804)

Kant is one of the central figures of modern philosophy, and he set the terms with which all subsequent thinkers have had to grapple. He argued that human perception structures natural laws, and that reason is the source of morality. His thought continues to hold a major influence on contemporary thought, especially in fields such as metaphysics, epistemology, ethics, political philosophy, and aesthetics.[49]
Kant named his brand of epistemology "Transcendental Idealism", and he first laid out these views in his famous work The Critique of Pure Reason. In it he argued that there were fundamental problems with both rationalist and empiricist dogma. To the rationalists he argued, broadly, that pure reason is flawed when it goes beyond its limits and claims to know those things that are necessarily beyond the realm of all possible experience: the existence of God, free will, and the immortality of the human soul. Kant referred to these objects as "The Thing in Itself" and went on to argue that their status as objects beyond all possible experience by definition means we cannot know them. To the empiricists he argued that while it is correct that experience is fundamentally necessary for human knowledge, reason is necessary for processing that experience into coherent thought. He therefore concluded that both reason and experience are necessary for human knowledge. In the same way, Kant also argued that it was wrong to regard thought as mere analysis. "In Kant's views, a priori concepts do exist, but if they are to lead to the amplification of knowledge, they must be brought into relation with empirical data".[50]

Contemporary rationalism

Rationalism has become a rarer label tout court for philosophers today; rather, many different kinds of specialised rationalisms are identified. For example, Robert Brandom has appropriated the terms rationalist expressivism and rationalist pragmatism as labels for aspects of his programme in Articulating Reasons, and identified linguistic rationalism, the claim that the contents of propositions "are essentially what can serve as both premises and conclusions of inferences", as a key thesis of Wilfrid Sellars.[51]

Criticism

Rationalism was criticized by William James for being out of touch with reality. James also criticized rationalism for representing the universe as a closed system, which contrasts with his view that the universe is an open system.

Five Technologies Needed to Survive Deep Space Exploration

Artist rendering of NASA’s Orion spacecraft as it travels 40,000 miles past the Moon during Exploration Mission-1, its first integrated flight with the Space Launch System rocket.

When a spacecraft built for humans ventures into deep space, it requires an array of features to keep it and the crew inside safe. Both distance and duration demand that a spacecraft have systems that can reliably operate far from home, be capable of keeping astronauts alive in case of emergencies, and still be light enough that a rocket can launch it.

Missions near the Moon will start when NASA’s Orion spacecraft leaves Earth atop the world’s most powerful rocket, NASA’s Space Launch System. After launch from the agency’s Kennedy Space Center in Florida, Orion will travel beyond the Moon to a distance more than 1,000 times farther than where the International Space Station flies in low-Earth orbit, and farther than any spacecraft built for humans has ever ventured. To accomplish this feat, Orion has built-in technologies that enable the crew and spacecraft to explore far into the solar system.

Systems to Live and Breathe

As humans travel farther from Earth for longer missions, the systems that keep them alive must be highly reliable while taking up minimal mass and volume. Orion will be equipped with advanced environmental control and life support systems designed for the demands of a deep space mission. A high-tech system already being tested aboard the space station will remove carbon dioxide (CO2) and humidity from inside Orion. Removal of CO2 and humidity is important to ensure the air remains safe for the crew to breathe, and water condensation on the vehicle hardware is controlled to prevent water intrusion into sensitive equipment or corrosion on the primary pressure structure.

The system also saves volume inside the spacecraft. Without such technology, Orion would have to carry many chemical canisters that would otherwise take up the space of 127 basketballs (or 32 cubic feet) inside the spacecraft—about 10 percent of crew livable area. Orion will also have a new compact toilet, smaller than the one on the space station. Long duration missions far from Earth drive engineers to design compact systems not only to maximize available space for crew comfort, but also to accommodate the volume needed to carry consumables like enough food and water for the entirety of a mission lasting days or weeks.

Highly reliable systems are critically important when a distant crew will not have the benefit of frequent resupply shipments to bring spare parts from Earth, like those to the space station. Even small systems have to function reliably to support life in space, from a working toilet to an automated fire suppression system or exercise equipment that helps astronauts stay in shape to counteract the zero-gravity environment in space that can cause muscle and bone atrophy. Distance from home also demands that Orion have spacesuits capable of keeping astronauts alive for six days in the event of cabin depressurization to support a long trip home.

Proper Propulsion

The farther into space a vehicle ventures, the more capable its propulsion systems need to be to maintain its course on the journey with precision and ensure its crew can get home.

Orion has a highly capable service module that serves as the powerhouse for the spacecraft, providing propulsion capabilities that enable Orion to go around the Moon and back on its exploration missions. The service module has 33 engines of various sizes. The main engine will provide major in-space maneuvering capabilities throughout the mission, including inserting Orion into lunar orbit and also firing powerfully enough to get out of the Moon’s orbit to return to Earth. The other 32 engines are used to steer and control Orion on orbit.

In part due to its propulsion capabilities, including tanks that can hold nearly 2,000 gallons of propellant and a backup for the main engine in the event of a failure, Orion's service module is equipped to handle the rigors of travel for missions that are both far and long, and it has the ability to bring the crew home in a variety of emergency situations.

The Ability to Hold Off the Heat

Going to the Moon is no easy task, and it’s only half the journey. The farther a spacecraft travels in space, the more heat it will generate as it returns to Earth. Getting back safely requires technologies that can help a spacecraft endure speeds 30 times the speed of sound and heat twice as hot as molten lava or half as hot as the sun.

When Orion returns from the Moon, it will be traveling nearly 25,000 mph, a speed that could cover the distance from Los Angeles to New York City in about six minutes. Its advanced heat shield, made with a material called AVCOAT, is designed to wear away as it heats up. Orion's heat shield is the largest of its kind ever built and will help the spacecraft withstand temperatures of around 5,000 degrees Fahrenheit during reentry through Earth's atmosphere.
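
Those speed comparisons are easy to check with a little arithmetic; the Los Angeles-New York distance and sea-level speed of sound below are approximate values I have assumed:

# Quick check of the reentry speed comparisons (approximate distances assumed).
reentry_speed_mph = 24_500            # "nearly 25,000 mph"
la_to_nyc_miles = 2_450               # assumed coast-to-coast distance
sound_speed_mph = 767                 # approximate sea-level speed of sound

minutes_coast_to_coast = la_to_nyc_miles / reentry_speed_mph * 60
print(f"LA to NYC in ~{minutes_coast_to_coast:.0f} minutes")     # ~6 minutes
print(f"about Mach {reentry_speed_mph / sound_speed_mph:.0f}")   # ~Mach 32, i.e. roughly 30x the speed of sound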

Before reentry, Orion also will endure a 700-degree temperature swing, from about minus 150 to 550 degrees Fahrenheit. Orion's highly capable thermal protection system, paired with thermal controls, will protect the spacecraft during periods of direct sunlight and pitch-black darkness while its crew enjoys a safe and stable interior temperature of about 77 degrees Fahrenheit.

Radiation Protection

As a spacecraft travels on missions beyond the protection of Earth's magnetic field, it will be exposed to a harsher radiation environment than in low-Earth orbit, with greater amounts of radiation from charged particles and solar storms that can disrupt critical computers, avionics and other equipment. Humans exposed to large amounts of radiation can experience both acute and chronic health problems, ranging from near-term radiation sickness to the potential of developing cancer in the long term.

Orion was designed from the start with built-in, system-level features to ensure the reliability of essential elements of the spacecraft during potential radiation events. For example, Orion is equipped with four identical, self-checking computers, plus an entirely different backup computer, to ensure Orion can still send commands in the event of a disruption. Engineers have tested parts and systems to a high standard to ensure that all critical systems remain operable even under extreme circumstances.
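
One common way to exploit redundant, self-checking computers is simple majority voting on their outputs. The sketch below is a generic illustration of that idea with made-up command strings, not Orion's actual flight software:

# Generic majority vote over redundant computer outputs (illustration only).
from collections import Counter
from typing import Optional

def majority_vote(outputs: list[str]) -> Optional[str]:
    """Return the command agreed on by a strict majority of computers, else None."""
    if not outputs:
        return None
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

# Three computers agree; one was upset by a radiation event and disagrees.
print(majority_vote(["fire_thruster", "fire_thruster", "fire_thruster", "hold"]))  # fire_thruster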

Orion also has a makeshift storm shelter below the main deck of the crew module. If a solar radiation event occurs, NASA has developed plans for the crew to create a temporary shelter inside using materials on board. A variety of radiation sensors will also fly on the spacecraft to help scientists better understand the radiation environment far away from Earth. One investigation, called AstroRad, will fly on Exploration Mission-1 and test an experimental vest that has the potential to help shield vital organs and decrease exposure from solar particle events.

Constant Communication and Navigation

Spacecraft venturing far from home travel beyond the reach of the Global Positioning System (GPS) and above the communication satellites in Earth orbit. To talk with mission control in Houston, Orion's communication and navigation systems will switch from NASA's Tracking and Data Relay Satellites (TDRS) system, used by the International Space Station, to the Deep Space Network.

Orion is also equipped with backup communication and navigation systems to help the spacecraft stay in contact with the ground and orient itself if its primary systems fail. The backup navigation system, a relatively new technology called optical navigation, uses a camera to take pictures of the Earth, Moon and stars and autonomously triangulate Orion's position from the photos. Its backup emergency communications system doesn't rely on the primary system or antennas for high-rate data transfer.
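
A common textbook form of optical navigation estimates range from the apparent size of a body of known diameter in an image. The sketch below shows only that simplified idea, with assumed camera parameters; it is not Orion's actual algorithm, which triangulates against multiple bodies:

# Range estimate from the apparent angular size of the Moon in a photo (simplified illustration).
import math

MOON_DIAMETER_KM = 3_474.8            # known physical diameter of the Moon

def range_from_image(apparent_diameter_px: float,
                     image_width_px: float,
                     horizontal_fov_deg: float) -> float:
    """Estimate distance to the Moon from how large it appears in the frame."""
    angular_diameter_rad = math.radians(horizontal_fov_deg) * apparent_diameter_px / image_width_px
    return (MOON_DIAMETER_KM / 2) / math.tan(angular_diameter_rad / 2)

# Assumed camera: 20-degree field of view, 2048-pixel-wide sensor, Moon spans 115 pixels.
print(f"~{range_from_image(115, 2048, 20):,.0f} km from the Moon")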

A scientist's final paper looks toward Earth's future climate

From space, satellites can see Earth breathe. This visualization shows 20 years of continuous observations of plant life on land and at the ocean's surface, from September 1997 to September 2017. On land, vegetation appears on a scale from brown (low vegetation) to dark green (lots of vegetation); at the ocean surface, phytoplankton are indicated on a scale from purple (low) to yellow (high). Credit: NASA
By Patrick Lynch,
NASA's Goddard Space Flight Center

Original link:  https://climate.nasa.gov/news/2765/a-scientists-final-paper-looks-toward-earths-future-climate/

A NASA scientist's final scientific paper, published posthumously this month, reveals new insights into one of the most complex challenges of Earth's climate: understanding and predicting future atmospheric levels of greenhouse gases and the role of the ocean and land in determining those levels.

A paper published in the Proceedings of the National Academy of Sciences was led by Piers J. Sellers, former director of the Earth Sciences Division at NASA's Goddard Space Flight Center, who died in December 2016. Sellers was an Earth scientist at NASA Goddard and later an astronaut who flew on three space shuttle missions.

The paper includes a significant overarching message: The current international fleet of satellites is making real improvements in accurately measuring greenhouse gases from space, but in the future a more sophisticated system of observations will be necessary to understand and predict Earth's changing climate at the level of accuracy needed by society.
In a 2016 interview, Piers Sellers talked about his enthusiasm and appreciation for working at NASA’s Goddard Space Flight Center. Credit: NASA’s Goddard Space Flight Center
Sellers wrote the paper along with colleagues at NASA's Jet Propulsion Laboratory and the University of Oklahoma. Work on the paper began in 2015, and Sellers continued working with his collaborators up until about six weeks before he died. His co-authors carried on the research and writing of the paper until its publication this week.

The paper focuses on the topic that was at the center of Sellers' research career: Earth's biosphere and its interactions with the planet's climate. In the 1980s he helped pioneer computer modeling of Earth's vegetation. In the new paper, Sellers and co-authors investigated "carbon cycle–climate feedbacks" – the potential response of natural systems to climate change caused by human emissions – and laid out a vision for how to best measure this response on a global scale from space.

The exchange of carbon between the land, ocean and air plays a huge role in determining the amount of greenhouse gases in the atmosphere, which will largely determine Earth's future climate. But there are complex interactions at play. While human-caused emissions of greenhouse gases are building up in the atmosphere, land ecosystems and the ocean still offset about 50 percent of those emissions. As the climate warms, scientists are unsure whether forests and the ocean will continue to absorb roughly half of the emissions, acting as a carbon sink, whether that offset will shrink, or whether the sinks will turn into carbon sources.
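
The arithmetic behind why that sink behavior matters is simple. The sketch below uses round, assumed numbers (roughly 10 gigatonnes of carbon emitted per year and about 2.13 GtC per ppm of atmospheric CO2) purely for illustration:

# Toy airborne-fraction calculation with round, assumed numbers.
emissions_gtc_per_year = 10.0        # assumed total human emissions, gigatonnes of carbon
sink_fraction = 0.5                  # land + ocean currently absorb roughly half
gtc_per_ppm = 2.13                   # approximate conversion to atmospheric CO2 concentration

airborne_gtc = emissions_gtc_per_year * (1 - sink_fraction)
print(f"~{airborne_gtc / gtc_per_ppm:.1f} ppm CO2 added per year")   # ~2.3 ppm

# If warming weakened the sinks so they absorbed only 30 percent, the same emissions
# would raise atmospheric CO2 noticeably faster:
weaker_airborne = emissions_gtc_per_year * (1 - 0.3)
print(f"~{weaker_airborne / gtc_per_ppm:.1f} ppm CO2 added per year with weaker sinks")  # ~3.3 ppm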

Paper co-author David Schimel, a scientist at JPL and a longtime scientific collaborator of Sellers', said the paper captured how he, Sellers and the other co-authors saw this scientific problem as one of the critical research targets for NASA Earth science.

"We all saw understanding the future of carbon cycle feedbacks as one of the grand challenges of climate change science," Schimel said.

Scientists' understanding of how Earth's living systems interact with rising atmospheric levels of greenhouse gases has changed tremendously in recent decades, said co-author Berrien Moore III, of the University of Oklahoma. Moore has been a scientific collaborator with Sellers and Schimel since the 1980s. He said that back then, scientists thought the ocean absorbed about half of annual carbon emissions, while plants on land played a minimal role. Scientists now understand the ocean and land together absorb about half of all emissions, with the terrestrial system’s role being affected greatly by large-scale weather patterns such as El Niño and La Niña. Moore is also the principal investigator of a NASA mission called GeoCarb, scheduled to launch in 2022, that will monitor greenhouse gases over much of the Western Hemisphere from a geostationary orbit.

NASA launched the Orbiting Carbon Observatory-2 (OCO-2) in 2014, and with the advancement of measurement and computer modeling techniques, scientists are gaining a better understanding of how carbon moves through the land, ocean and atmosphere. This new paper builds on previous research and focuses on a curious chain of events in 2015. While human emissions of carbon dioxide leveled off for the first time in decades during that year, the growth rate in atmospheric concentrations of carbon dioxide actually spiked at the same time.

This was further evidence of what scientists had been piecing together for years – that a complex combination of factors, including weather, drought, fires and more, contributes to greenhouse gas levels in the atmosphere.

However, with the new combination of OCO-2 observations and space-based measurements of plant fluorescence (essentially a measure of photosynthesis), researchers have begun producing more accurate estimates of where carbon was absorbed and released around the planet during 2015, when an intense El Niño was in effect, compared to other years.

The paper follows a report from a 2015 workshop on the carbon cycle led by Sellers, Schimel, and Moore. Schimel and Moore both pointed out that every one of the more than 40 participants in the workshop contributed to a final scientific report from the meeting – a rare occurrence. They attributed this, in part, to the inspirational role Sellers played in spurring thought and action.

"When you have someone like Piers in the room, there's a magnetic effect," Moore said. "Piers had his shoulder to the wheel, so everyone had to have their shoulders to the wheel."

Schimel and Moore said the workshop paper lays out a vision for what's needed in a future space-based observing system to measure, understand, and predict carbon cycle feedbacks: active and passive instruments, and satellites in both low-Earth and geostationary orbits around the world. In the coming years, NASA and space agencies in Europe, Japan, and China will all launch new greenhouse-gas monitoring missions.

"Piers thought it's absolutely essential to get it right," said Schimel, "and essential to more or less get it right the first time."

The authors dedicated the paper's publication to Sellers, and in their dedication referenced a Winston Churchill quote often cited by the British-born scientist. They wrote: "P.J.S. approached the challenge of carbon science in the spirit of a favorite Churchill quote, 'Difficulties mastered are opportunities won,' and he aimed to resolve the carbon–climate problem by rising to the difficulties and seizing the opportunities."

For more: http://www.pnas.org/content/early/2018/07/05/1716613115

A living programmable biocomputing device based on RNA

It can sense and analyze multiple complex signals in living cells for future synthetic diagnostics and therapeutics.
July 28, 2017
Original link:  http://www.kurzweilai.net/a-living-programmable-biocomputing-device-based-on-rna
“Ribocomputing devices” (yellow) developed by a team at the Wyss Institute can now be used by synthetic biologists to sense and interpret multiple signals in cells and logically instruct their ribosomes (blue and green) to produce different proteins. (credit: Wyss Institute at Harvard University)

Synthetic biologists at Harvard’s Wyss Institute for Biologically Inspired Engineering and associates have developed a living programmable “ribocomputing” device based on networks of precisely designed, self-assembling synthetic RNAs (ribonucleic acid). The RNAs can sense multiple biosignals and make logical decisions to control protein production with high precision.

As reported in Nature, the synthetic biological circuits could be used to produce drugs, fine chemicals, and biofuels or detect disease-causing agents and release therapeutic molecules inside the body. The low-cost diagnostic technologies may even lead to nanomachines capable of hunting down cancer cells or switching off aberrant genes.

Biological logic gates

Similar to a digital circuit, these synthetic biological circuits can process information and make logic-guided decisions, using basic logic operations — AND, OR, and NOT. But instead of detecting voltages, the decisions are based on specific chemicals or proteins, such as toxins in the environment, metabolite levels, or inflammatory signals. The specific ribocomputing parts can be readily designed on a computer.
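
In software terms, the decision layer itself is ordinary Boolean logic over sensed inputs. The sketch below is a toy model of that layer only, with made-up signal names and thresholds; it says nothing about the RNA chemistry that actually implements the gates:

# Toy model of a logic-guided decision over chemical inputs (illustrative signal names).
def sense(signal_level: float, threshold: float) -> bool:
    """Treat a chemical concentration above a threshold as a logical 1."""
    return signal_level >= threshold

# Hypothetical readings from the cell's environment.
toxin_present   = sense(signal_level=0.8, threshold=0.5)
metabolite_high = sense(signal_level=0.6, threshold=0.5)
inflammation    = sense(signal_level=0.1, threshold=0.5)

# Express the reporter only if (toxin AND metabolite) AND NOT inflammation.
express_reporter = toxin_present and metabolite_high and not inflammation
print("produce green fluorescent reporter" if express_reporter else "stay dark")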

E. coli bacteria engineered to be ribocomputing devices output a green-glowing protein when they detect a specific set of programmed RNA molecules as input signals (credit: Harvard University)

The research was performed with E. coli bacteria, which regulate the expression of a fluorescent (glowing) reporter protein when the bacteria encounter a specific complex set of intra-cellular stimuli. But the researchers believe ribocomputing devices can work with other host organisms or in extracellular settings.

Previous synthetic biological circuits have only been able to sense a handful of signals, giving them an incomplete picture of conditions in the host cell. They are also built out of different types of molecules, such as DNAs, RNAs, and proteins, that must find, bind, and work together to sense and process signals. Identifying molecules that cooperate well with one another is difficult and makes development of new biological circuits a time-consuming and often unpredictable process.

Brain-like neural networks next

Ribocomputing devices could also be freeze-dried on paper, leading to paper-based biological circuits, including diagnostics that can sense and integrate several disease-relevant signals in a clinical sample, the researchers say.

The next stage of research will focus on the use of RNA “toehold” technology* to produce neural networks within living cells — circuits capable of analyzing a range of excitatory and inhibitory inputs, averaging them, and producing an output once a particular threshold of activity is reached. (Similar to how a neuron averages incoming signals from other neurons.)
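
The neuron-like behavior described here reduces to a weighted sum and a threshold. A minimal sketch of that computation, with made-up weights and inputs, looks like this:

# Minimal threshold "neuron" over excitatory (+) and inhibitory (-) inputs (made-up values).
def fires(inputs: list[float], weights: list[float], threshold: float) -> bool:
    """Average the weighted inputs and fire if the result clears the threshold."""
    activity = sum(i * w for i, w in zip(inputs, weights)) / len(inputs)
    return activity >= threshold

input_signals = [1.0, 1.0, 1.0, 1.0]        # four input signals present
weights = [0.9, 0.8, 0.7, -0.6]             # a negative weight marks an inhibitory input
print(fires(input_signals, weights, threshold=0.4))   # True: the output is produced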

Ultimately, researchers hope to induce cells to communicate with one another via programmable molecular signals, forming a truly interactive, brain-like network, according to lead author Alex Green, an assistant professor at Arizona State University’s Biodesign Institute.

Wyss Institute Core Faculty member Peng Yin, Ph.D., who led the study, is also Professor of Systems Biology at Harvard Medical School.

The study was funded by the Wyss Institute’s Molecular Robotics Initiative, a Defense Advanced Research Projects Agency (DARPA) Living Foundries grant, and grants from the National Institutes of Health (NIH), the Office of Naval Research (ONR), the National Science Foundation (NSF) and the Defense Threat Reduction Agency (DTRA).

* The team’s approach evolved from its previous development of “toehold switches” in 2014 — programmable hairpin-like nano-structures made of RNA. In principle, RNA toehold switches can control the production of a specific protein: when a desired complementary “trigger” RNA, which can be part of the cell’s natural RNA repertoire, is present and binds to the toehold switch, the hairpin structure breaks open. Only then will the cell’s ribosomes get access to the RNA and produce the desired protein.
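
The trigger-recognition step is, at bottom, a base-pairing check between the trigger RNA and the switch's exposed toehold sequence. The sketch below checks Watson-Crick complementarity for made-up sequences and is only a cartoon of that first binding step:

# Cartoon of toehold recognition: does the trigger RNA base-pair with the toehold? (made-up sequences)
PAIRS = {"A": "U", "U": "A", "G": "C", "C": "G"}

def is_complementary(toehold: str, trigger: str) -> bool:
    """True if the trigger is the reverse complement of the exposed toehold sequence."""
    reverse_complement = "".join(PAIRS[base] for base in reversed(toehold))
    return trigger == reverse_complement

toehold_seq = "GGAUCCAAGU"                 # hypothetical exposed toehold on the switch
trigger_seq = "ACUUGGAUCC"                 # hypothetical trigger RNA
print(is_complementary(toehold_seq, trigger_seq))   # True: the hairpin would open and the ribosome gets access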

Wyss Institute | Mechanism of the Toehold Switch


Abstract of Complex cellular logic computation using ribocomputing devices

Synthetic biology aims to develop engineering-driven approaches to the programming of cellular functions that could yield transformative technologies. Synthetic gene circuits that combine DNA, protein, and RNA components have demonstrated a range of functions such as bistability, oscillation, feedback, and logic capabilities. However, it remains challenging to scale up these circuits owing to the limited number of designable, orthogonal, high-performance parts, the empirical and often tedious composition rules, and the requirements for substantial resources for encoding and operation. Here, we report a strategy for constructing RNA-only nanodevices to evaluate complex logic in living cells. Our ‘ribocomputing’ systems are composed of de-novo-designed parts and operate through predictable and designable base-pairing rules, allowing the effective in silico design of computing devices with prescribed configurations and functions in complex cellular environments. These devices operate at the post-transcriptional level and use an extended RNA transcript to co-localize all circuit sensing, computation, signal transduction, and output elements in the same self-assembled molecular complex, which reduces diffusion-mediated signal losses, lowers metabolic cost, and improves circuit reliability. We demonstrate that ribocomputing devices in Escherichia coli can evaluate two-input logic with a dynamic range up to 900-fold and scale them to four-input AND, six-input OR, and a complex 12-input expression (A1 AND A2 AND NOT A1*) OR (B1 AND B2 AND NOT B2*) OR (C1 AND C2) OR (D1 AND D2) OR (E1 AND E2). Successful operation of ribocomputing devices based on programmable RNA interactions suggests that systems employing the same design principles could be implemented in other host organisms or in extracellular settings.
