In an article for the World Economic Forum, Lewis Gilbert, CEO of the Institute on the Environment at the University of Minnesota, concluded that CO2 levels
in the atmosphere, as quantified by the Keeling Curve of the Mauna Loa
Observatory data, are continuing to rise. None of our political and
public discussions, protocols and negotiations, and general public
awareness have had any effect on this curve.
Gilbert argued that political control of CO2 emissions is
not feasible, as it runs against human nature. Growth appears to be a
driver of human evolution, and as such seems imprinted in our very DNA:
humankind and its wealth are driven to grow, against all opposing
forces.
If this hypothesis is valid, regulatory operations are meaningless,
as they will be circumvented. The only way out is to change the whole
system so that growth is decoupled from CO2 generation.
Disruptive technologies are one way of achieving this, and systems
thinking, systems science and the nanosciences have an important role to
play.
As scientists, we believe there is reason to be hopeful because
CO2-neutral and CO2-negative technologies could bring huge economic
benefits. But for that to happen, there must also be the right social
and political support for such innovations. Here are five ways we can
start doing that:
1. Stop subsidizing technologies like biogas, bioethanol or biodiesel
Their impact is debatable, and they remain CO2-driven growth
operations. Such support was well intentioned, but it forces
non-subsidized innovations to compete with technologies that are only
competitive because of the huge support they receive.
2. Invest in disruptive technologies
Disruptive technologies are not currently receiving the support they
need. We need an alternative culture of investment and entrepreneurship
driving these disruptive changes. Those countries that don’t do well
under the current system, which is dominated by CO2-based industries and
technologies, should see these disruptions as a big opportunity. This
is mostly a question of social attitudes about accepting risks in our
lives. Governments should also be investing in research centres
searching for new disruptive technologies. In many European countries,
the opposite is currently true: research topics are increasingly
dictated by industry, and policy-makers prefer scientists to “think
within the box”.
3. Prioritise cheap energy storage
The European example, with widespread penetration of sustainable
energy sources (wind, water and solar), points to the need for cheap
energy storage: in the medium term, preferably at least 10 times cheaper
than current technologies (roughly €200 per kWh stored). Cheap technology that
allows reversible storage of electrical energy in the form of chemical
bonds (fuels) could be the disruptive technology we’re looking for.
Nanotechnology can remove bottlenecks in the direct conversion of solar
energy to fuels, in power-to-gas, and in fuel cells.
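The cost target above can be made concrete with a back-of-the-envelope calculation. This sketch reads the parenthetical €200/kWh as the approximate cost of current storage technology (one plausible reading of the text); the 10 kWh battery figure is a hypothetical example.

```python
# Illustrative arithmetic for the storage cost target described above.
# Assumption: the quoted EUR 200/kWh is the cost of storing energy with
# current technology; "10 times cheaper" gives the medium-term target.

current_cost_eur_per_kwh = 200.0  # rough current cost per kWh stored
reduction_factor = 10             # "at least 10 times cheaper"

target_cost = current_cost_eur_per_kwh / reduction_factor
print(f"Target storage cost: about EUR {target_cost:.0f}/kWh")

# Hypothetical example: storage premium for a 10 kWh home battery
home_battery_kwh = 10
print(f"10 kWh at target cost: EUR {home_battery_kwh * target_cost:.0f}")
```

At the implied target of roughly €20/kWh, storage would add only a marginal cost on top of generation, which is what would make chemical-bond storage competitive.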
4. Promote biomass-based technologies
Biomass-based technologies that take the waste products of
agriculture and turn them into non-combustible materials (e.g. pavements
and buildings) are by definition CO2 negative and have the potential to
address the wider CO2 problem: planet Earth binds six times more
carbon in plants than the current carbon footprint of all humankind, so
agricultural waste streams alone could offset a significant share of CO2 release.
Such techniques might even lower atmospheric CO2. Using the enormous
amounts of carbon materials created in this way for soil improvement
could help fertilize anthropogenic badlands. This would be an elegant
way of tackling CO2 while aiding nutrition and economic growth. If CO2
emissions are not quickly reduced to the required level, then these
CO2-negative technologies may be needed to keep global warming within
the 2°C limit.
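The "six times" figure above can be sketched with rough, commonly cited order-of-magnitude values; the specific numbers below (annual carbon fixed by land plants versus annual anthropogenic emissions) are illustrative assumptions, not figures from the article.

```python
# Rough sketch of the carbon-flux comparison behind the "six times"
# claim. Both figures are approximate order-of-magnitude estimates.

npp_gtc_per_year = 60.0            # carbon fixed annually by land plants (approx.)
anthropogenic_gtc_per_year = 10.0  # annual anthropogenic carbon emissions (approx.)

ratio = npp_gtc_per_year / anthropogenic_gtc_per_year
print(f"Plants bind roughly {ratio:.0f}x the anthropogenic carbon flux")

# Hypothetical implication: diverting a fraction of plant-bound carbon
# (e.g. from agricultural waste) into long-lived materials
diverted_fraction = 0.2  # assume 20% of plant-fixed carbon is captured
print(f"20% of plant-fixed carbon = {diverted_fraction * npp_gtc_per_year:.0f} GtC/yr, "
      f"vs {anthropogenic_gtc_per_year:.0f} GtC/yr emitted")
```

Under these assumptions, even a modest diverted fraction is comparable in scale to total emissions, which is the arithmetic behind the claim that waste streams alone could matter.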
5. Develop oxy-fuels
Oxygen from cheap artificial photosynthesis based on nanoscience (or
cheap water electrolysis from solar electricity) could support
“oxy-fuel” technology. The oxygen created in this way does not need much
purification and could be combined with solar hydrogen to form the base
of CO2-neutral liquid transportation fuels. This will depend on the
application of sophisticated chemistry and the use of nanocatalysis. But
it is a valid approach to transforming low-value carbon sources into
high-value chemicals and fuels, using carbon as a (CO2-neutral) energy
carrier and the sun as the energy source. Such super-plants could in principle
be built now, with the only barrier being that current stakeholders want
to sell a different, older technology.
We believe that climate control is not a problem of technology, but a
problem of system thinking and social and private values. If we can
decouple economic growth from CO2 production, and develop new processes
that generate value by reducing CO2, solutions to climate issues will naturally follow.
Authors: Markus Antonietti is Director of the Max Planck
Institute of Colloids and Interfaces; Joost Reek is Professor of Homogeneous,
Supramolecular and Bio-Inspired Catalysis at the University of
Amsterdam, and Director of the Van ‘t Hoff Institute for Molecular Sciences.
Both are members of the World Economic Forum Global Agenda Council on Nanotechnology. The authors kindly acknowledge fruitful discussions with Prof. B. van der Zwaan from ECN/UvA.
Image: The sun rises over Argentina’s Perito Moreno glacier near
the city of El Calafate, in the Patagonian province of Santa Cruz,
December 16, 2009. REUTERS/Marcos Brindicci
Tabula rasa (/ˈtæbjələˈrɑːsə, -zə, ˈreɪ-/) refers to the epistemological idea that individuals are born without built-in mental content and that therefore all knowledge comes from experience or perception. Proponents of tabula rasa generally disagree with the doctrine of innatism which holds that the mind is born already in possession of certain knowledge. Generally, proponents of the tabula rasa theory also favour the "nurture" side of the nature versus nurture debate when it comes to aspects of one's personality, social and emotional behaviour, knowledge and sapience.
History
Tabula rasa is a Latin phrase often translated as "blank slate" in English and originates from the Roman tabula used for notes, which was blanked by heating the wax and then smoothing it.[1] This roughly equates to the English term "blank slate" (or, more literally, "erased slate") which refers to the emptiness of a slate
prior to it being written on with chalk. Both may be renewed
repeatedly, by melting the wax of the tablet or by erasing the chalk on
the slate.
Philosophy
In Western philosophy, the concept of tabula rasa can be traced back to the writings of Aristotle who writes in his treatise "Περί Ψυχῆς" (De Anima or On the Soul) of the "unscribed tablet." In one of the more well-known passages of this treatise he writes that:
Haven't we already disposed of the difficulty about
interaction involving a common element, when we said that mind is in a
sense potentially whatever is thinkable, though actually it is nothing
until it has thought? What it thinks must be in it just as characters
may be said to be on a writing-tablet on which as yet nothing stands
written: this is exactly what happens with mind.[2]
This idea was further developed in Ancient Greek philosophy by the Stoic
school. Stoic epistemology emphasizes that the mind starts blank, but
acquires knowledge as the outside world is impressed upon it.[3] The doxographer Aetius
summarizes this view as "When a man is born, the Stoics say, he has the
commanding part of his soul like a sheet of paper ready for writing
upon."[4] Diogenes Laërtius attributes a similar belief to the Stoic Zeno of Citium when he writes in Lives and Opinions of Eminent Philosophers that:
Perception, again, is an impression produced on the mind,
its name being appropriately borrowed from impressions on wax made by a
seal; and perception they divide into, comprehensible and
incomprehensible: Comprehensible, which they call the criterion of
facts, and which is produced by a real object, and is, therefore, at the
same time conformable to that object; Incomprehensible, which has no
relation to any real object, or else, if it has any such relation, does
not correspond to it, being but a vague and indistinct representation.[5]
In the eleventh century, the theory of tabula rasa was developed more clearly by the Persian philosopher Avicenna (Ibn Sina in Arabic). He argued that the "...human intellect at birth resembled a tabula rasa, a pure potentiality that is actualized through education and comes to know," and that knowledge is attained through "...empirical familiarity with objects in this world from which one abstracts universal concepts," which develops through a "...syllogistic method of reasoning;
observations lead to propositional statements, which when compounded
lead to further abstract concepts." He further argued that the intellect
itself "...possesses levels of development from the static/material
intellect (al-‘aql al-hayulani), that potentiality can acquire knowledge to the active intellect (al-‘aql al-fa‘il), the state of the human intellect at conjunction with the perfect source of knowledge."[6]
In the thirteenth century, St. Thomas Aquinas brought the Aristotelian and Avicennian notions to the forefront of Christian thought. These notions sharply contrasted with the previously held Platonic
notions of the human mind as an entity that preexisted somewhere in the
heavens, before being sent down to join a body here on Earth (see
Plato's Phaedo and Apology, as well as others). St. Bonaventure
(also thirteenth century) was one of the fiercest intellectual
opponents of Aquinas, offering some of the strongest arguments toward
the Platonic idea of the mind.
The writings of Avicenna, Ibn Tufail, and Aquinas on the tabula rasa theory stood unprogressed and untested for several centuries.[citation needed] For example, the late medieval English jurist Sir John Fortescue, in his work In Praise of the Laws of England (Chapter VI), takes for granted the notion of tabula rasa,
stressing it as the basis of the need for the education of the young in
general, and of young princes specifically. "Therefore, Prince, whilst
you are young and your mind is as it were a clean slate, impress on it
these things, lest in future it be impressed more pleasurably with
images of lesser worth." (His igitur, Princeps, dum Adolescens es, et Anima tua velut Tabula rasa, depinge eam, ne in futurum ipsa Figuris minoris Frugi delectabilius depingatur.)
The modern idea of the theory, however, is attributed mostly to John Locke's expression of the idea in Essay Concerning Human Understanding (he uses the term "white paper" in Book II, Chap. I, 2). In Locke's philosophy, tabula rasa
was the theory that at birth the (human) mind is a "blank slate"
without rules for processing data, and that data is added and rules for
processing are formed solely by one's sensory experiences. The notion is central to Lockean empiricism. As understood by Locke, tabula rasa meant that the mind of the individual was born blank, and it also emphasized the freedom of individuals to author their own soul.
Individuals are free to define the content of their character—but basic
identity as a member of the human species cannot be altered. This
presumption of a free, self-authored mind combined with an immutable
human nature leads to the Lockean doctrine of "natural" rights. Locke's
idea of tabula rasa is frequently compared with Thomas Hobbes's viewpoint of human nature, in which humans are endowed with inherent mental content—particularly with selfishness.[citation needed]
The eighteenth-century Swiss philosopher Jean-Jacques Rousseau used tabula rasa
to support his argument that warfare is an advent of society and
agriculture, rather than something that occurs from the human state of
nature. Since tabula rasa states that humans are born with a "blank-slate", Rousseau uses this to suggest that humans must learn warfare.
Tabula rasa also features in Sigmund Freud's psychoanalysis. Freud depicted personality traits as being formed by family dynamics (see Oedipus complex).
Freud's theories imply that humans lack free will, but also that
genetic influences on human personality are minimal. In Freudian
psychoanalysis, one is largely determined by one's upbringing.[citation needed]
The tabula rasa concept became popular in social sciences during the twentieth century. Early ideas of eugenics
posited that human intelligence correlated strongly with social class,
but these ideas were rejected, and the idea that genes (or simply
"blood") determined a person's character became regarded as racist. By
the 1970s, scientists such as John Money had come to see gender identity as socially constructed, rather than rooted in genetics.
Science
Psychology and neurobiology
Psychologists and neurobiologists have shown evidence that initially, the entire cerebral cortex
is programmed and organized to process sensory input, control motor
actions, regulate emotion, and respond reflexively (under predetermined
conditions).[8] These programmed mechanisms in the brain subsequently act to learn and refine the ability of the organism.[9][10]
For example, psychologist Steven Pinker showed that—in contrast to written language—the brain is "programmed" to pick up spoken language spontaneously.[11]
There have been claims by a minority in psychology and neurobiology, however, that the brain is tabula rasa
only for certain behaviours. For instance, with respect to one's
ability to acquire both general and special types of knowledge or
skills, Howe argued against the existence of innate talent.[12] There also have been neurological investigations into specific learning and memory functions, such as Karl Lashley's study on mass action and serial interaction mechanisms.
Important evidence against the tabula rasa model of the mind comes from behavioural genetics, especially twin and adoption studies (see below). These indicate strong genetic influences on personal characteristics such as IQ, alcoholism, gender identity, and other traits.[11]
Critically, multivariate studies show that the distinct faculties of
the mind, such as memory and reason, fractionate along genetic
boundaries. Cultural universals such as emotion and the relative resilience of psychological adaptation to accidental biological changes (for instance the David Reimer case of gender reassignment following an accident) also support basic biological mechanisms in the mind.[13]
Social pre-wiring
Twin studies have resulted in important evidence against the tabula rasa model of the mind, specifically, of social behaviour.
The social pre-wiring hypothesis refers to the ontogeny of social interaction, informally described as being "wired to be social." The theory asks whether a propensity for socially oriented action is already present before birth. Research on the theory concludes that newborns are born into the world with a unique genetic wiring to be social.[14]
Circumstantial evidence supporting the social pre-wiring
hypothesis can be revealed when examining newborns' behaviour. Newborns,
not even hours after birth, have been found to display a preparedness
for social interaction.
This preparedness is expressed in ways such as their imitation of
facial gestures. This observed behaviour cannot be attributed to any
current form of socialization or social construction. Rather, newborns most likely inherit social behaviour and identity, to some extent, through genetics.[14]
Principal evidence of this theory is uncovered by examining twin pregnancies. The main argument is, if there are social behaviours that are inherited and developed before birth, then one should expect twin fetuses to engage in some form of social interaction
before they are born. Thus, ten fetuses were analyzed over a period of
time using ultrasound techniques. Using kinematic analysis, the results
of the experiment were that the twin fetuses would interact with each
other for longer periods and more often as the pregnancies went on.
Researchers were able to conclude that the movements
between the co-twins were not accidental but specifically aimed.[14]
These findings support the social pre-wiring hypothesis: "The central advance of this study is the demonstration that 'social actions' are already performed in the second trimester of gestation. Starting from the 14th week of gestation twin fetuses plan and execute movements specifically aimed at the co-twin. These findings force us to predate the emergence of social behaviour:
when the context enables it, as in the case of twin fetuses,
other-directed actions are not only possible but predominant over
self-directed actions."[14]
Computer science
In computer science, tabula rasa
refers to the development of autonomous agents with a mechanism to
reason and plan toward their goal, but no "built-in" knowledge-base of
their environment. Thus they truly are a blank slate.
In reality, autonomous agents possess an initial data-set or
knowledge-base, but this cannot be immutable or it would hamper autonomy
and heuristic ability.[citation needed] Even if the data-set is empty, it can usually be argued that there is a built-in bias in the reasoning and planning mechanisms.[citation needed] Whether placed there intentionally or unintentionally by the human designer, such bias negates the true spirit of tabula rasa.[15]
A synthetic (programming) language parser (LR(1), LALR(1) or SLR(1), for example) could be considered a special case of a tabula rasa: it is designed to accept any of a possibly infinite set of source programs within a single
programming language, and to output either a good parse of the program
or a good machine-language translation of it (either of which
represents success), or a failure, and nothing else. The "initial
data-set" is a set of tables, generally produced mechanically by a
parser-table generator, usually from a BNF representation of the source
language; the tables represent a "table representation" of that single programming language.
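The accept-or-fail behaviour described above can be illustrated with a toy parser. Real LR/LALR/SLR parsers are table-driven; this hand-written recursive-descent sketch for the hypothetical grammar `E -> NUM ('+' NUM)*` only demonstrates the point that the parser produces either a parse or a failure, and nothing else.

```python
# Toy parser illustrating the accept-or-reject behaviour of a
# programming-language parser. Grammar: E -> NUM ('+' NUM)*
# Returns a parse tree on success, or None on failure.

def parse(tokens):
    pos = 0

    def expect_num():
        nonlocal pos
        if pos < len(tokens) and tokens[pos].isdigit():
            pos += 1
            return ("num", tokens[pos - 1])
        return None

    left = expect_num()
    if left is None:
        return None                     # input not in the language
    while pos < len(tokens) and tokens[pos] == "+":
        pos += 1
        right = expect_num()
        if right is None:
            return None                 # dangling '+': reject
        left = ("add", left, right)
    # succeed only if every token was consumed
    return left if pos == len(tokens) else None

print(parse(["1", "+", "2", "+", "3"]))  # nested ("add", ...) tree
print(parse(["1", "+"]))                 # None: not in the language
```

In a real LR parser the `expect_num`-style decisions would instead be lookups into mechanically generated action/goto tables, but the input/output contract is the same.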
Image: Four vertical layers in the new 3D nanosystem chip. Top (fourth) layer: sensors and more than one million carbon-nanotube field-effect transistor (CNFET) logic inverters; third layer: on-chip non-volatile RRAM (1 Mbit memory); second layer: CNFET logic with a classification accelerator (to identify sensor inputs); first (bottom) layer: silicon FET logic. (credit: Max M. Shulaker et al./Nature)
A radical new 3D chip that combines computation and data storage in
vertically stacked layers — allowing for processing and storing massive
amounts of data at high speed in future transformative nanosystems — has
been designed by researchers at Stanford University and MIT.
The new 3D-chip design* replaces silicon with carbon nanotubes
(sheets of 2-D graphene formed into nanocylinders) and integrates resistive random-access memory (RRAM) cells.
Carbon-nanotube field-effect transistors (CNFETs) are an emerging
transistor technology that can scale beyond the limits of silicon
MOSFETs (conventional chips), and promise an order-of-magnitude
improvement in energy-efficient computation. However, experimental
demonstrations of CNFETs so far have been small-scale and limited to
integrating only tens or hundreds of devices (see earlier 2015 Stanford
research, “Skyscraper-style carbon-nanotube chip design…”).
The researchers integrated more than 1 million RRAM cells and 2
million carbon-nanotube field-effect transistors in the chip, making it
the most complex nanoelectronic system ever made with emerging
nanotechnologies, according to the researchers. RRAM is an emerging
memory technology that promises high-capacity, non-volatile data
storage, with improved speed, energy efficiency, and density, compared
to dynamic random-access memory (DRAM).
Instead of requiring separate components, the RRAM cells and carbon
nanotubes are built vertically over one another, creating a dense new 3D
computer architecture** with interleaving layers of logic and memory.
By using ultradense through-chip vias
(electrical interconnecting wires passing between layers), the high
delay with conventional wiring between computer components is
eliminated.
The new 3D nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ
processing of the captured data, and produce “highly processed”
information. “Such complex nanoelectronic systems will be essential for
future high-performance, highly energy-efficient electronic systems,”
the researchers say.
How to combine computation and storage
Image: Illustration of separate CPU (bottom) and RAM memory (top) in current computer architecture (image credit: iStock)
The new chip design aims to replace current chip designs, which
separate computing and data storage, resulting in limited-speed
connections.
Separate 2D chips have been required because “building conventional
silicon transistors involves extremely high temperatures of over 1,000
degrees Celsius,” explains Max Shulaker,
an assistant professor of electrical engineering and computer science
at MIT and lead author of a paper published July 5, 2017 in the journal Nature. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”
Instead, carbon nanotube circuits and RRAM memory can be fabricated
at much lower temperatures: below 200°C. “This means they can be built
up in layers without harming the circuits beneath,” says Shulaker.
Overcoming communication and computing bottlenecks
As applications analyze increasingly massive volumes of data, the
limited rate at which data can be moved between different chips is
creating a critical communication “bottleneck.” And with limited real
estate on increasingly miniaturized chips, there is not enough room to
place chips side-by-side.
At the same time, embedded intelligence in areas ranging from
autonomous driving to personalized medicine is now generating huge
amounts of data, but silicon transistors are no longer improving at the
historic rate that they have for decades.
Instead, three-dimensional integration is the most promising approach
to continue the technology-scaling path set forth by Moore’s law,
allowing an increasing number of devices to be integrated per unit
volume, according to Jan Rabaey, a professor of electrical engineering
and computer science at the University of California at Berkeley, who
was not involved in the research.
Three-dimensional integration “leads to a fundamentally different
perspective on computing architectures, enabling an intimate
interweaving of memory and logic,” he says. “These structures may be
particularly suited for alternative learning-based computational
paradigms such as brain-inspired systems and deep neural nets, and the
approach presented by the authors is definitely a great first step in
that direction.”
The new 3D design provides several benefits for future computing systems, including:
Logic circuits made from carbon nanotubes can be an order of
magnitude more energy-efficient compared to today’s logic made from
silicon.
RRAM memory is denser, faster, and more energy-efficient compared to conventional DRAM (dynamic random-access memory) devices.
The dense through-chip vias (wires) can enable vertical connectivity
that is 1,000 times more dense than conventional packaging and
chip-stacking solutions allow, which greatly improves the data
communication bandwidth between vertically stacked functional layers.
For example, each sensor in the top layer can connect directly to its
respective underlying memory cell with an inter-layer via. This enables
the sensors to write their data in parallel directly into memory and at
high speed.
The design is compatible in both fabrication and design with today’s CMOS silicon infrastructure.
Shulaker next plans to work with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system.
This work was funded by the Defense Advanced Research Projects
Agency, the National Science Foundation, Semiconductor Research
Corporation, STARnet SONIC, and member companies of the Stanford SystemX
Alliance.
* As a working-prototype demonstration of the potential of the
technology, the researchers took advantage of the ability of carbon
nanotubes to also act as sensors. On the top layer of the chip, they
placed more than 1 million carbon nanotube-based sensors, which they
used to detect and classify ambient gases for detecting signs of disease
by sensing particular compounds in a patient’s breath, says Shulaker.
By layering sensing, data storage, and computing, the chip was able to
measure each of the sensors in parallel, and then write directly into
its memory, generating huge bandwidth in just one device, according to
Shulaker. The top layer could be replaced with additional computation or
data storage subsystems, or with other forms of input/output, he
explains.
** Previous R&D in 3D chip technologies and their limitations are covered here,
noting that “in general, 3D integration is a broad term that includes
such technologies as 3D wafer-level packaging (3DWLP); 2.5D and 3D
interposer-based integration; 3D stacked ICs (3D-SICs), monolithic 3D
ICs; 3D heterogeneous integration; and 3D systems integration.” The new
Stanford-MIT nanosystem design significantly expands this definition.
Abstract of Three-dimensional integration of nanotechnologies for computing and data storage on a single chip
The computing demands of future data-intensive applications will
greatly exceed the capabilities of current electronics, and are unlikely
to be met by isolated improvements in transistors, data storage
technologies or integrated circuit architectures alone. Instead,
transformative nanosystems, which use new nanotechnologies to
simultaneously realize improved devices and new integrated circuit
architectures, are required. Here we present a prototype of such a
transformative nanosystem. It consists of more than one million
resistive random-access memory cells and more than two million
carbon-nanotube field-effect transistors—promising new nanotechnologies
for use in energy-efficient digital logic circuits and for dense data
storage—fabricated on vertically stacked layers in a single chip. Unlike
conventional integrated circuit architectures, the layered fabrication
realizes a three-dimensional integrated circuit architecture with
fine-grained and dense vertical connectivity between layers of
computing, data storage, and input and output (in this instance,
sensing). As a result, our nanosystem can capture massive amounts of
data every second, store it directly on-chip, perform in situ
processing of the captured data, and produce ‘highly processed’
information. As a working prototype, our nanosystem senses and
classifies ambient gases. Furthermore, because the layers are fabricated
on top of silicon logic circuitry, our nanosystem is compatible with
existing infrastructure for silicon-based technologies. Such complex
nano-electronic systems will be essential for future high-performance
and highly energy-efficient electronic systems.
In philosophy, rationalism is the epistemological view that "regards reason as the chief source and test of knowledge" or "any view appealing to reason as a source of knowledge or justification". More formally, rationalism is defined as a methodology or a theory "in which the criterion of the truth is not sensory but intellectual and deductive".
In an old controversy, rationalism was opposed to empiricism,
where the rationalists believed that reality has an intrinsically
logical structure. Because of this, the rationalists argued that certain
truths exist and that the intellect can directly grasp these truths.
That is to say, rationalists asserted that certain rational principles
exist in logic, mathematics, ethics, and metaphysics
that are so fundamentally true that denying them causes one to fall
into contradiction. The rationalists had such a high confidence in
reason that empirical proof and physical evidence were regarded as
unnecessary to ascertain certain truths – in other words, "there are
significant ways in which our concepts and knowledge are gained
independently of sense experience".[6]
Different degrees of emphasis on this method or theory lead to a
range of rationalist standpoints, from the moderate position "that
reason has precedence over other ways of acquiring knowledge" to the
more extreme position that reason is "the unique path to knowledge".[7] Given a pre-modern understanding of reason, rationalism is identical to philosophy, the Socratic life of inquiry, or the zetetic (skeptical)
clear interpretation of authority (open to the underlying or essential
cause of things as they appear to our sense of certainty). In recent
decades, Leo Strauss
sought to revive "Classical Political Rationalism" as a discipline that
understands the task of reasoning, not as foundational, but as maieutic.
In the past, particularly in the
17th and 18th centuries, the term 'rationalist' was often used to refer
to free thinkers of an anti-clerical and anti-religious outlook, and for
a time the word acquired a distinctly pejorative force (thus in 1670
Sanderson spoke disparagingly of 'a mere rationalist, that is to say in
plain English an atheist of the late edition...'). The use of the label
'rationalist' to characterize a world outlook which has no place for the
supernatural is becoming less popular today; terms like 'humanist' or 'materialist' seem largely to have taken its place. But the old usage still survives.
Philosophical usage
Rationalism is often contrasted with empiricism. Taken very broadly these views are not mutually exclusive, since a philosopher can be both rationalist and empiricist.[4] Taken to extremes, the empiricist view holds that all ideas come to us a posteriori,
that is to say, through experience; either through the external senses
or through such inner sensations as pain and gratification. The
empiricist essentially believes that knowledge is based on or derived
directly from experience. The rationalist believes we come to knowledge a priori – through the use of logic – and that such knowledge is thus independent of sensory experience. In other words, as Galen Strawson
once wrote, "you can see that it is true just lying on your couch. You
don't have to get up off your couch and go outside and examine the way
things are in the physical world. You don't have to do any science."[11]
Between both philosophies, the issue at hand is the fundamental source
of human knowledge and the proper techniques for verifying what we think
we know. Whereas both philosophies are under the umbrella of epistemology, their argument lies in the understanding of the warrant, which is under the wider epistemic umbrella of the theory of justification.
Theory of justification
The theory of justification is the part of epistemology that attempts to understand the justification of propositions and beliefs. Epistemologists are concerned with various epistemic features of belief, which include the ideas of justification, warrant, rationality, and probability.
Of these four terms, the term that has been most widely used and
discussed by the early 21st century is "warrant". Loosely speaking,
justification is the reason that someone (probably) holds a belief.
If "A" makes a claim, and "B" then casts doubt on it, "A"'s next
move would normally be to provide justification. The precise method one
uses to provide justification is where the lines are drawn between
rationalism and empiricism (among other philosophical views). Much of
the debate in these fields is focused on analyzing the nature of knowledge and how it relates to connected notions such as truth, belief, and justification.
Thesis of rationalism
At
its core, rationalism consists of three basic claims. For one to
consider themselves a rationalist, they must adopt at least one of these
three claims: The Intuition/Deduction Thesis, The Innate Knowledge
Thesis, or The Innate Concept Thesis. In addition, rationalists can
choose to adopt the claims of the Indispensability of Reason and/or the
Superiority of Reason – although one can be a rationalist without
adopting either thesis.
The Intuition/Deduction Thesis
Rationale: "Some propositions in a particular subject area, S, are
knowable by us by intuition alone; still others are knowable by being
deduced from intuited propositions."[12]
Generally speaking, intuition is a priori
knowledge or experiential belief characterized by its immediacy; a form
of rational insight. We simply "see" something in such a way as to give
us a warranted belief. Beyond that, the nature of intuition is hotly
debated.
In the same way, generally speaking, deduction is the process of reasoning from one or more general premises to reach a logically certain conclusion. Using valid arguments, we can deduce from intuited premises.
For example, when we combine both concepts, we can intuit that
the number three is prime and that it is greater than two. We then
deduce from this knowledge that there is a prime number greater than
two. Thus, it can be said that intuition and deduction combined to
provide us with a priori knowledge – we gained this knowledge independently of sense experience.
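The prime-number example above can be written out formally. The following Lean 4 sketch is a modern formalization (not part of the classical rationalist literature), with "Prime" left as an abstract predicate rather than the arithmetic definition: the two intuited premises combine, by deduction alone, into the general conclusion.

```lean
-- Intuition/Deduction sketch: from the intuited premises that three is
-- prime and that three is greater than two, we deduce that there exists
-- a prime number greater than two.
theorem exists_prime_gt_two (Prime : Nat → Prop)
    (h1 : Prime 3)   -- intuited premise: three is prime
    (h2 : 3 > 2)     -- intuited premise: three is greater than two
    : ∃ n, Prime n ∧ n > 2 :=
  ⟨3, h1, h2⟩        -- deduction: three itself witnesses the claim
```

The proof term uses no measurement or observation, which is the point of the thesis: the warrant comes entirely from the premises and the deductive step.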
Empiricists such as David Hume have been willing to accept this thesis for describing the relationships among our own concepts.[12] In this sense, empiricists argue that we are allowed to intuit and deduce truths from knowledge that has been obtained a posteriori.
By injecting different subjects into the Intuition/Deduction
thesis, we are able to generate different arguments. Most rationalists
agree that mathematics is knowable by applying intuition and deduction. Some go further and include ethical truths in the category of things knowable by intuition and deduction. Furthermore, some rationalists also claim that metaphysics is knowable under this thesis.
In addition to different subjects, rationalists sometimes vary
the strength of their claims by adjusting their understanding of the
warrant. Some rationalists understand warranted beliefs to be beyond
even the slightest doubt; others are more conservative and understand
the warrant to be belief beyond a reasonable doubt.
Rationalists also have different understanding and claims
involving the connection between intuition and truth. Some rationalists
claim that intuition is infallible and that anything we intuit to be
true is as such. More contemporary rationalists accept that intuition is
not always a source of certain knowledge – thus allowing for the
possibility of a deceiver who might cause the rationalist to intuit a
false proposition in the same way a third party could cause the
rationalist to have perceptions of nonexistent objects.
Naturally, the more subjects rationalists claim to be knowable by the
Intuition/Deduction thesis, the more certain they are of their warranted
beliefs, and the more strictly they adhere to the infallibility of
intuition, the more controversial their claims and the more radical
their rationalism.[12]
To argue in favor of this thesis, Gottfried Wilhelm Leibniz,
a prominent German philosopher, says, "The senses, although they are
necessary for all our actual knowledge, are not sufficient to give us
the whole of it, since the senses never give anything but instances,
that is to say particular or individual truths. Now all the instances
which confirm a general truth, however numerous they may be, are not
sufficient to establish the universal necessity of this same truth, for
it does not follow that what happened before will happen in the same way
again. … From which it appears that necessary truths, such as we find
in pure mathematics, and particularly in arithmetic and geometry, must
have principles whose proof does not depend on instances, nor
consequently on the testimony of the senses, although without the senses
it would never have occurred to us to think of them…"[13]
The Innate Knowledge Thesis
Rationale: "We have knowledge of some truths in a particular subject area, S, as part of our rational nature."[14]
The Innate Knowledge thesis is similar to the Intuition/Deduction thesis in the regard that both theses claim knowledge is gained a priori.
The two theses go their separate ways when describing how that
knowledge is gained. As the name, and the rationale, suggests, the
Innate Knowledge thesis claims knowledge is simply part of our rational
nature. Experiences can trigger a process that allows this knowledge to
come into our consciousness, but the experiences don't provide us with
the knowledge itself. The knowledge has been with us since the beginning,
and the experience simply brings it into focus, in the same way a
photographer can bring the background of a picture into focus by
changing the aperture of the lens. The background was always there, just
not in focus.
This thesis targets a problem with the nature of inquiry originally postulated by Plato in Meno.
Here, Plato asks about inquiry: how do we gain knowledge of a theorem
in geometry? We inquire into the matter. Yet knowledge by inquiry seems
impossible.[15]
In other words, "If we already have the knowledge, there is no place
for inquiry. If we lack the knowledge, we don't know what we are seeking
and cannot recognize it when we find it. Either way we cannot gain
knowledge of the theorem by inquiry. Yet, we do know some theorems."[14] The Innate Knowledge thesis offers a solution to this paradox. By claiming that knowledge is already with us, either consciously or unconsciously,
a rationalist claims we don't really "learn" things in the traditional
usage of the word, but rather that we simply bring to light what we
already know.
The Innate Concept Thesis
Rationale: "We have some of the concepts we employ in a particular subject area, S, as part of our rational nature."[16]
Similar to the Innate Knowledge thesis, the Innate Concept thesis
suggests that some concepts are simply part of our rational nature.
These concepts are a priori
in nature and sense experience is irrelevant to determining the nature
of these concepts (though, sense experience can help bring the concepts
to our conscious mind).
Some philosophers, such as John Locke (who is considered one of the most influential thinkers of the Enlightenment and an empiricist) argue that the Innate Knowledge thesis and the Innate Concept thesis are the same.[17] Other philosophers, such as Peter Carruthers,
argue that the two theses are distinct from one another. As with the
other theses covered under the umbrella of rationalism, the more types
and greater number of concepts a philosopher claims to be innate, the
more controversial and radical their position; "the more a concept seems
removed from experience and the mental operations we can perform on
experience the more plausibly it may be claimed to be innate. Since we
do not experience perfect triangles but do experience pains, our concept
of the former is a more promising candidate for being innate than our
concept of the latter."[16]
In his book Meditations on First Philosophy,[18] René Descartes postulates three classifications for our ideas
when he says, "Among my ideas, some appear to be innate, some to be
adventitious, and others to have been invented by me. My understanding
of what a thing is, what truth is, and what thought is, seems to derive
simply from my own nature. But my hearing a noise, as I do now, or
seeing the sun, or feeling the fire, comes from things which are located
outside me, or so I have hitherto judged. Lastly, sirens, hippogriffs and the like are my own invention."[19]
Adventitious ideas are those concepts we gain through sense experience,
such as the sensation of heat: they originate from outside sources,
transmitting their own likeness rather than something else, and we
cannot simply will them away. Ideas invented by us, such as those found in mythology, legends, and fairy tales, are created by us from other ideas we possess. Lastly, innate ideas, such as our idea of perfection, are those ideas we have as a result of mental processes that are beyond what experience can directly or indirectly provide.
Gottfried Wilhelm Leibniz
defends the idea of innate concepts by suggesting the mind plays a role
in determining the nature of concepts. To explain this, he likens the
mind to a block of marble in the New Essays on Human Understanding:
"This is why I have taken as an illustration a block of veined marble,
rather than a wholly uniform block or blank tablets, that is to say what
is called tabula rasa in the language of the philosophers. For if the
soul were like those blank tablets, truths would be in us in the same
way as the figure of Hercules is in a block of marble, when the marble
is completely indifferent whether it receives this or some other figure.
But if there were veins in the stone which marked out the figure of
Hercules rather than other figures, this stone would be more determined
thereto, and Hercules would be as it were in some manner innate in it,
although labour would be needed to uncover the veins, and to clear them
by polishing, and by cutting away what prevents them from appearing. It
is in this way that ideas and truths are innate in us, like natural
inclinations and dispositions, natural habits or potentialities, and not
like activities, although these potentialities are always accompanied
by some activities which correspond to them, though they are often
imperceptible."[20]
The other two theses
The
three aforementioned theses of Intuition/Deduction, Innate Knowledge,
and Innate Concept are the cornerstones of rationalism. To be considered
a rationalist, one must adopt at least one of those three claims. The
following two theses are traditionally adopted by rationalists, but they
aren't essential to the rationalist's position.
The Indispensability of Reason Thesis has the following rationale, "The knowledge we gain in subject area, S, by intuition and deduction, as well as the ideas and instances of knowledge in S that are innate to us, could not have been gained by us through sense experience."[3] In short, this thesis claims that experience cannot provide what we gain from reason.
The Superiority of Reason Thesis has the following rationale, "The knowledge we gain in subject area S by intuition and deduction or have innately is superior to any knowledge gained by sense experience".[3] In other words, this thesis claims reason is superior to experience as a source for knowledge.
In addition to these claims, rationalists often adopt
similar stances on other aspects of philosophy. Most rationalists reject
skepticism for the areas of knowledge they claim are knowable a priori.
Naturally, when one claims some truths are innately known to us, one
must reject skepticism in relation to those truths. Especially for
rationalists who adopt the Intuition/Deduction thesis, the idea of
epistemic foundationalism tends to crop up. This is the view that we
know some truths without basing our belief in them on any others and
that we then use this foundational knowledge to know more truths.[3]
Background
Rationalism
– as an appeal to human reason as a way of obtaining knowledge – has a
philosophical history dating from antiquity. The analytical nature of much of philosophical enquiry, the awareness of apparently a priori
domains of knowledge such as mathematics, combined with the emphasis on
obtaining knowledge through the use of rational faculties (commonly
rejecting, for example, direct revelation) have made rationalist themes very prevalent in the history of philosophy.
Since the Enlightenment, rationalism has usually been associated with
the introduction of mathematical methods into philosophy, as seen in the
works of Descartes, Leibniz, and Spinoza.[5] This is commonly called continental rationalism, because it was predominant in the continental schools of Europe, whereas in Britain empiricism dominated.
Even then, the distinction between rationalists and empiricists
was drawn at a later period and would not have been recognized by the
philosophers involved. Also, the distinction between the two
philosophies is not as clear-cut as is sometimes suggested; for example,
Descartes and Locke have similar views about the nature of human ideas.[6]
Proponents of some varieties of rationalism argue that, starting with foundational basic principles, like the axioms of geometry, one could deductively derive the rest of all possible knowledge. The philosophers who held this view most clearly were Baruch Spinoza and Gottfried Leibniz,
whose attempts to grapple with the epistemological and metaphysical
problems raised by Descartes led to a development of the fundamental
approach of rationalism. Both Spinoza and Leibniz asserted that, in principle,
all knowledge, including scientific knowledge, could be gained through
the use of reason alone, though they both observed that this was not
possible in practice for human beings except in specific areas such as mathematics. On the other hand, Leibniz admitted in his book Monadology that "we are all mere Empirics in three fourths of our actions."[7]
History
Rationalist philosophy from antiquity
Although
rationalism in its modern form post-dates antiquity, philosophers from
this time laid down the foundations of rationalism.[citation needed] In particular, they developed the understanding that we may be aware of knowledge available only through the use of rational thought.[citation needed]
Pythagoras (570–495 BCE)
Pythagoras was one of the first Western philosophers to stress rationalist insight.[21] He is often revered as a great mathematician, mystic and scientist, but he is best known for the Pythagorean theorem,
which bears his name, and for discovering the mathematical relationship
between the length of strings on a lute and the pitches of the notes.
Pythagoras "believed these harmonies reflected the ultimate nature of
reality. He summed up the implied metaphysical rationalism in the words
'All is number'. It is probable that he had caught the rationalist's
vision, later seen by Galileo (1564–1642), of a world governed throughout by mathematically formulable laws".[21] It has been said that he was the first man to call himself a philosopher, or lover of wisdom.[22]
Plato (427–347 BCE)
Plato held rational insight to a very high standard, as is seen in his works such as Meno and The Republic. He taught on the Theory of Forms (or the Theory of Ideas)[23][24][25] which asserts that the highest and most fundamental kind of reality is not the material world of change known to us through sensation, but rather the abstract, non-material (but substantial) world of forms (or ideas).[26] For Plato, these forms were accessible only to reason and not to sense.[21] In fact, it is said that Plato admired reason, especially in geometry, so highly that he had the phrase "Let no one ignorant of geometry enter" inscribed over the door to his academy.[27]
Aristotle (384–322 BCE)
Aristotle's main contribution to rationalist thinking was the use of syllogistic
logic and its use in argument. Aristotle defines syllogism as "a
discourse in which certain (specific) things having been supposed,
something different from the things supposed results of necessity
because these things are so."[28] Despite this very general definition, in his work Prior Analytics Aristotle limits himself to categorical syllogisms, which consist of three categorical propositions.[29] These included categorical modal syllogisms.[30]
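The first-figure categorical syllogism (later known to logicians as Barbara: all M are P; all S are M; therefore all S are P) can be rendered in a modern proof assistant. The following Lean 4 sketch is a present-day propositional formalization, not Aristotle's own notation:

```lean
-- Barbara: All M are P; all S are M; therefore all S are P.
-- The three categorical terms are modeled as predicates over a domain α.
theorem barbara {α : Type} (S M P : α → Prop)
    (major : ∀ x, M x → P x)    -- major premise: all M are P
    (minor : ∀ x, S x → M x)    -- minor premise: all S are M
    : ∀ x, S x → P x :=         -- conclusion: all S are P
  fun x hS => major x (minor x hS)
```

The conclusion "results of necessity because these things are so": the proof is nothing but the chaining of the two premises, with no appeal to the content of S, M, or P.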
Post-Aristotle
Although
the three great Greek philosophers disagreed with one another on
specific points, they all agreed that rational thought could bring to
light knowledge that was self-evident – information that humans
otherwise couldn't know without the use of reason. After Aristotle's
death, Western rationalistic thought was generally characterized by its
application to theology, such as in the works of Augustine, the Islamic philosopher Avicenna, and the Jewish philosopher and theologian Maimonides. One notable event in the Western timeline was the philosophy of Thomas Aquinas, who attempted to merge Greek rationalism and Christian revelation in the thirteenth century.[21]
René Descartes (1596–1650)
Descartes was the first of the modern rationalists and has been dubbed the 'Father of Modern Philosophy.' Much subsequent Western philosophy is a response to his writings,[33][34][35] which are studied closely to this day.
Descartes thought that only knowledge of eternal truths –
including the truths of mathematics, and the epistemological and
metaphysical foundations of the sciences – could be attained by reason
alone; other knowledge, the knowledge of physics, required experience of
the world, aided by the scientific method. He also argued that although dreams appear as real as sense experience,
these dreams cannot provide persons with knowledge. Also, since
conscious sense experience can be the cause of illusions, sense
experience itself can be doubted. As a result, Descartes deduced that a
rational pursuit of truth should doubt every belief about sensory
reality. He elaborated these beliefs in such works as Discourse on Method, Meditations on First Philosophy, and Principles of Philosophy. Descartes developed a method to attain truths according to which nothing that cannot be recognised by the intellect (or reason)
can be classified as knowledge. These truths are gained "without any
sensory experience," according to Descartes. Truths that are attained by
reason are broken down into elements that intuition can grasp, which,
through a purely deductive process, will result in clear truths about
reality.
Descartes therefore argued, as a result of his method, that
reason alone determined knowledge, and that this could be done
independently of the senses. For instance, his famous dictum, cogito ergo sum or "I think, therefore I am", is a conclusion reached a priori,
i.e., prior to any kind of experience on the matter. The simple meaning
is that doubting one's existence, in and of itself, proves that an "I"
exists to do the thinking. In other words, doubting one's own doubting
is absurd.[36]
This was, for Descartes, an irrefutable principle upon which to ground
all forms of other knowledge. Descartes posited a metaphysical dualism, distinguishing between the substances of the human body ("res extensa") and the mind or soul ("res cogitans"). This crucial distinction would be left unresolved and lead to what is known as the mind-body problem, since the two substances in the Cartesian system are independent of each other and irreducible.
Baruch Spinoza (1632–1677)
The philosophy of Baruch Spinoza is a systematic, logical, rational philosophy developed in seventeenth-century Europe.[37][38][39]
Spinoza's philosophy is a system of ideas constructed upon basic
building blocks with an internal consistency with which he tried to
answer life's major questions and in which he proposed that "God exists
only philosophically."[39][40] He was heavily influenced by Descartes,[41] Euclid,[40] and Thomas Hobbes,[41] as well as theologians in the Jewish philosophical tradition such as Maimonides.[41] But his work was in many respects a departure from the Judeo-Christian tradition. Many of Spinoza's ideas continue to vex thinkers today and many of his principles, particularly regarding the emotions, have implications for modern approaches to psychology. To this day, many important thinkers have found Spinoza's "geometrical method"[39] difficult to comprehend: Goethe admitted that he found this concept confusing[citation needed]. His magnum opus, Ethics, contains unresolved obscurities and has a forbidding mathematical structure modeled on Euclid's geometry.[40] Spinoza's philosophy attracted believers such as Albert Einstein[42] and much intellectual attention.[43][44][45][46][47]
Gottfried Leibniz (1646–1716)
Leibniz was the last of the great Rationalists who contributed heavily to other fields such as metaphysics, epistemology, logic, mathematics, physics, jurisprudence, and the philosophy of religion; he is also considered to be one of the last "universal geniuses".[48]
He did not develop his system, however, independently of these
advances. Leibniz rejected Cartesian dualism and denied the existence of
a material world. In Leibniz's view there are infinitely many simple
substances, which he called "monads" (possibly taking the term from the work of Anne Conway).
Leibniz developed his theory of monads in response to both Descartes and Spinoza,
because the rejection of their visions forced him to arrive at his own
solution. Monads are the fundamental unit of reality, according to
Leibniz, constituting both inanimate and animate objects. These units of
reality represent the universe, though they are not subject to the laws
of causality or space (which he called "well-founded phenomena"). Leibniz, therefore, introduced his principle of pre-established harmony to account for apparent causality in the world.
Immanuel Kant (1724–1804)
Kant is one of the central figures of modern philosophy,
and he set the terms with which all subsequent thinkers have had to grapple.
He argued that human perception structures natural laws, and that
reason is the source of morality. His thought continues to hold a major
influence in contemporary thought, especially in fields such as
metaphysics, epistemology, ethics, political philosophy, and aesthetics.[49]
Kant named his brand of epistemology "Transcendental Idealism", and he first laid out these views in his famous work The Critique of Pure Reason.
In it he argued that there were fundamental problems with both
rationalist and empiricist dogma. To the rationalists he argued,
broadly, that pure reason is flawed when it goes beyond its limits and
claims to know those things that are necessarily beyond the realm of all
possible experience: the existence of God,
free will, and the immortality of the human soul. Kant referred to
these objects as "The Thing in Itself" and goes on to argue that their
status as objects beyond all possible experience by definition means we
cannot know them. To the empiricist he argued that while it is correct
that experience is fundamentally necessary for human knowledge, reason
is necessary for processing that experience into coherent thought. He
therefore concludes that both reason and experience are necessary for
human knowledge. In the same way, Kant also argued that it was wrong to
regard thought as mere analysis. "In Kant's views, a priori
concepts do exist, but if they are to lead to the amplification of
knowledge, they must be brought into relation with empirical data".[50]
Contemporary rationalism
Rationalism has become a rarer label tout court for philosophers today; rather, many different kinds of specialised rationalisms are identified. For example, Robert Brandom has appropriated the terms rationalist expressivism and rationalist pragmatism as labels for aspects of his programme in Articulating Reasons, and identified linguistic rationalism,
the claim that the content of propositions "are essentially what can
serve as both premises and conclusions of inferences", as a key thesis
of Wilfrid Sellars.[51]
Criticism
Rationalism was criticized by William James
for being out of touch with reality. James also criticized rationalism
for representing the universe as a closed system, which contrasts with
his view that the universe is an open system.