Biological synapse structure (credit: Thomas Splettstoesser/CC)
MIT engineers have designed a new artificial synapse made from
silicon germanium that can precisely control the strength of an electric
current flowing across it.
In simulations, the researchers found that the chip and its synapses
could be used to recognize samples of handwriting with 95 percent
accuracy. The engineers say the new design, published today (Jan. 22) in
the journal Nature Materials, is a major step toward building
portable, low-power neuromorphic chips for use in pattern recognition
and other machine-learning tasks.
Controlling the flow of ions: the challenge
Researchers in the emerging field of “neuromorphic computing” have
attempted to design computer chips that work like the human brain. The
idea is to apply a voltage across layers that would cause ions
(electrically charged atoms) to move in a switching medium (synapse-like
space) to create conductive filaments in a manner that’s similar to how
the “weight” (connection strength) of a synapse changes.
More than 100 trillion synapses in a typical human brain mediate neuron signaling, strengthening some neural connections while pruning (weakening) others — a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, all at lightning speed.
Instead of carrying out computations based on binary, on/off
signaling, like current digital chips, the elements of a “brain on a
chip” would work in an analog fashion, exchanging a gradient of signals,
or “weights” — much like neurons that activate in various ways
(depending on the type and number of ions that flow across a synapse).
But it’s been difficult to control the flow of ions in existing synapse designs, which have multiple paths that make it hard to predict where ions will make it through, according to research team leader Jeehwan Kim, PhD, an assistant professor in the departments of Mechanical Engineering and Materials Science and Engineering and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories.
“Once you apply some voltage to represent some data with your
artificial neuron, you have to erase and be able to write it again in
the exact same way,” Kim says. “But in an amorphous solid, when you
write again, the ions go in different directions because there are lots
of defects. This stream is changing, and it’s hard to control. That’s
the biggest problem — nonuniformity of the artificial synapse.”
Epitaxial random access memory (epiRAM)
(Left) Cross-sectional transmission electron microscope image of a 60 nm silicon-germanium (SiGe) crystal grown on a silicon substrate (diagonal white lines represent candidate dislocations). Scale bar: 25 nm. (Right) Cross-sectional scanning electron microscope image of an epiRAM device with titanium (Ti)–gold (Au) and silver (Ag)–palladium (Pd) layers. Scale bar: 100 nm. (credit: Shinhyun Choi et al./Nature Materials)
So instead of using amorphous materials as an artificial synapse, Kim and his colleagues created a new “epitaxial random access memory” (epiRAM) design.
They started with a wafer of silicon that, at microscopic resolution, resembles a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly
larger than that of silicon, and Kim found that together, the two
perfectly mismatched materials could form a funnel-like dislocation,
creating a single path through which ions can predictably flow.*
“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.
Testing the ability to recognize samples of handwriting
As a test, Kim and his team explored how the epiRAM device would
perform if it were to carry out an actual learning task: recognizing
samples of handwriting — which researchers consider to be a practical
test for neuromorphic chips. Such chips would consist of artificial
“neurons” connected to other “neurons” via filament-based artificial
“synapses.”
Image-recognition simulation. (Left) A three-layer multilayer-perceptron neural network with black-and-white input signals for each layer at the algorithm level. The inner product (summation) of the input neuron signal vector and the first synapse array vector is transferred, after activation and binarization, as the input vector of the second synapse array. (Right) Circuit block diagram of the hardware implementation, showing a synapse layer composed of epiRAM crossbar arrays and the peripheral circuit. (credit: Shinhyun Choi et al./Nature Materials)
They ran a computer simulation of an artificial neural network
consisting of three sheets of neural layers connected via two layers of
artificial synapses, based on measurements from their actual
neuromorphic chip. They fed into their simulation tens of thousands of
samples from the MNIST handwritten recognition dataset**, commonly used by neuromorphic designers.
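To make the simulation concrete, here is a minimal sketch of this kind of setup: a three-layer perceptron whose two weight matrices pass through a quantizer that models discrete analog conductance states with device-to-device noise, echoing the 4 percent variation reported in the footnote below. It is illustrative only; the layer sizes, the 64 conductance levels, and the synthetic stand-in data (used so the script runs without downloading MNIST) are assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' simulation): a 784-100-10 perceptron
# whose weights pass through a "device" quantizer modeling a finite number
# of analog conductance states plus device-to-device variation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MNIST: 1,000 flattened 28x28 "images", 10 classes.
X = rng.random((1000, 784)).astype(np.float32)
y = rng.integers(0, 10, size=1000)

def quantize(w, levels=64, w_max=1.0, d2d_sigma=0.04):
    """Map ideal weights onto discrete conductance states.

    levels    -- assumed number of distinguishable analog states
    d2d_sigma -- 4% device-to-device variation (the figure quoted for epiRAM)
    """
    step = 2 * w_max / (levels - 1)
    w_q = np.round(np.clip(w, -w_max, w_max) / step) * step
    return w_q * (1 + d2d_sigma * rng.standard_normal(w.shape))

# Two synapse arrays (W1, W2) connect three neuron layers.
W1 = 0.05 * rng.standard_normal((784, 100))
W2 = 0.05 * rng.standard_normal((100, 10))

def forward(X, W1, W2):
    h = np.maximum(X @ quantize(W1), 0.0)   # hidden layer, ReLU activation
    return h, h @ quantize(W2)              # output logits

lr = 0.01
for epoch in range(5):
    h, logits = forward(X, W1, W2)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p
    grad[np.arange(len(y)), y] -= 1.0       # softmax cross-entropy gradient
    # Straight-through update: gradients from the quantized forward pass
    # are applied to the ideal (unquantized) weights.
    W2 -= lr * h.T @ grad / len(y)
    W1 -= lr * X.T @ ((grad @ W2.T) * (h > 0)) / len(y)

_, logits = forward(X, W1, W2)
print("training accuracy:", (logits.argmax(axis=1) == y).mean())
```

On real MNIST, a properly trained network of this shape reaches accuracy in the mid-90s, which is the regime of the 95.1 percent figure reported below.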
They found that their neural network device recognized handwritten
samples 95.1 percent of the time — close to the 97 percent accuracy of
existing software algorithms running on large computers.
A chip to replace a supercomputer
The team is now in the process of fabricating a real working
neuromorphic chip that can carry out handwriting-recognition tasks.
Looking beyond handwriting, Kim says the team’s artificial synapse
design will enable much smaller, portable neural network devices that
can perform complex computations that are currently only possible with
large supercomputers.
“Ultimately, we want a chip as big as a fingernail to replace one big
supercomputer,” Kim says. “This opens a stepping stone to produce real
artificial intelligence hardware.”
This research was supported in part by the National Science
Foundation. Co-authors included researchers at Arizona State University.
* They applied voltage to each synapse and found that all
synapses exhibited about the same current, or flow of ions, with about a
4 percent variation between synapses — a much more uniform performance
compared with synapses made from amorphous material. They also tested a
single synapse over multiple trials, applying the same voltage over 700
cycles, and found the synapse exhibited the same current, with just 1
percent variation from cycle to cycle.
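As a rough illustration of how such uniformity figures are derived, the sketch below computes coefficient-of-variation statistics from hypothetical current readings; the numbers are simulated stand-ins, not the paper's data.

```python
# Sketch: computing the uniformity statistics quoted above from raw current
# readings. The measurements here are simulated stand-ins, not real data.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical read currents: 100 different synapses at one voltage, and a
# single synapse read over 700 set/reset cycles.
device_currents = 10e-6 * (1 + 0.04 * rng.standard_normal(100))  # amperes
cycle_currents = 10e-6 * (1 + 0.01 * rng.standard_normal(700))

def variation(samples):
    """Coefficient of variation (std/mean) as a percentage."""
    return 100 * samples.std() / samples.mean()

print(f"device-to-device variation: {variation(device_currents):.1f}%")
print(f"cycle-to-cycle variation:   {variation(cycle_currents):.1f}%")
```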
** The MNIST (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems and for training and testing in the field of machine learning. It contains 60,000 training images and 10,000 testing images.
Abstract of SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations
Although several types of architecture combining memory cells and
transistors have been used to demonstrate artificial synaptic arrays,
they usually present limited scalability and high power consumption.
Transistor-free analog switching devices may overcome these limitations,
yet the typical switching process they rely on—formation of filaments
in an amorphous medium—is not easily controlled and hence hampers the
spatial and temporal reproducibility of the performance. Here, we
demonstrate analog resistive switching devices that possess desired
characteristics for neuromorphic computing networks with minimal
performance variations using a single-crystalline SiGe layer epitaxially
grown on Si as a switching medium. Such epitaxial random access
memories utilize threading dislocations in SiGe to confine metal
filaments in a defined, one-dimensional channel. This confinement
results in drastically enhanced switching uniformity and long
retention/high endurance with a high analog on/off ratio. Simulations
using the MNIST handwritten recognition data set prove that epitaxial
random access memories can operate with an online learning accuracy of
95.1%.
Speciesism (/ˈspiːʃiːˌzɪzəm, -siːˌzɪz-/) involves the assignment of different values, rights, or special consideration to individuals solely on the basis of their species membership. The term is sometimes used by animal rights advocates, who argue that speciesism is a prejudice similar to racism or sexism,
in that the treatment of individuals is predicated on group membership
and morally irrelevant physical differences. Their claim is that species
membership has no moral significance.
(DJS view): I regard the campaign against meat eating as illogical and misguided. Unlike humans, the cattle, pigs, etc., that we consume cannot comprehend that they are being raised for food or are fated to be slaughtered. And if we did end meat eating, they would never have the opportunity to live at all, which I regard as worse than dying, something we all must do and which doesn't make us wish we had never lived. As long as the animals are treated humanely while alive (something I will gladly pay for), if anything we are doing them a favor by eating their flesh.
Ironically, it is the anthropomorphizing of other species which leads people to this error. It is like condemning human ownership and mastery over dogs as slavery; in fact, not only does the dog not comprehend the concepts involved, a kind master is the best thing that can happen to it.
The term has not been used uniformly, but broadly embraces two ideas. It usually refers to "human speciesism" (human supremacism), the exclusion of all nonhuman animals from the rights, freedoms, and protections afforded to humans.
It can also refer to the more general idea of assigning value to a
being on the basis of species membership alone, so that
"human–chimpanzee speciesism" would involve human beings favouring
rights for chimpanzees over rights for dogs, because of human–chimpanzee
similarities.
The term speciesism, and the argument that it is simply a prejudice, first appeared in 1970 in a privately printed pamphlet written by British psychologist Richard D. Ryder. Ryder was a member of a group of intellectuals in Oxford, England, the nascent animal rights community now known as the Oxford Group.
One of the group's activities was distributing pamphlets about areas of
concern; the pamphlet titled "Speciesism" was written to protest
against animal experimentation.[5]
Ryder argued in the pamphlet that "[s]ince Darwin, scientists
have agreed that there is no 'magical' essential difference between
humans and other animals, biologically-speaking. Why then do we make an
almost total distinction morally? If all organisms are on one physical
continuum, then we should also be on the same moral continuum." He wrote
that, at that time in the UK, 5,000,000 animals were being used each
year in experiments, and that attempting to gain benefits for our own
species through the mistreatment of others was "just 'speciesism' and as
such it is a selfish emotional argument rather than a reasoned one".[6] Ryder used the term again in an essay, "Experiments on Animals", in Animals, Men and Morals
(1971), a collection of essays on animal rights edited by philosophy
graduate students Stanley and Roslind Godlovitch and John Harris, who
were also members of the Oxford Group. Ryder wrote:
In as much as both "race" and "species" are vague terms
used in the classification of living creatures according, largely, to
physical appearance, an analogy can be made between them. Discrimination
on grounds of race, although most universally condoned two centuries
ago, is now widely condemned. Similarly, it may come to pass that
enlightened minds may one day abhor "speciesism" as much as they now
detest "racism." The illogicality in both forms of prejudice is of an
identical sort. If it is accepted as morally wrong to deliberately
inflict suffering upon innocent human creatures, then it is only logical
to also regard it as wrong to inflict suffering on innocent individuals
of other species. ... The time has come to act upon this logic.[7]
Those who claim that speciesism is unfair to non-human species have
often argued their case by invoking mammals and chickens in the context
of research or farming.[8][9][10]
However, there is not yet a clear definition or line agreed upon by a
significant segment of the movement as to which species are to be
treated equally with humans or in some ways additionally protected:
mammals, birds, reptiles, arthropods, insects, bacteria, etc.
Spread of the idea
Peter Singer popularized the idea in Animal Liberation (1975)
The term was popularized by the Australian philosopher Peter Singer in his book Animal Liberation (1975). Singer had known Ryder from his own time as a graduate philosophy student at Oxford.[11] He credited Ryder with having coined the term and used it in the title of his book's fifth chapter: "Man's Dominion ... a short history of speciesism",
defining it as "a prejudice or attitude of bias in favour of the
interests of members of one's own species and against those of members
of other species":
Racists violate the principle of equality by giving
greater weight to the interests of members of their own race when there
is a clash between their interests and the interests of those of another
race. Sexists violate the principle of equality by favouring the
interests of their own sex. Similarly, speciesists allow the interests
of their own species to override the greater interests of members of
other species. The pattern is identical in each case.[12]
Singer argued from a preference-utilitarian perspective, writing that speciesism violates the principle of equal consideration of interests, an idea based on Jeremy Bentham's
principle: "each to count for one, and none for more than one". Singer
argued that, although there may be differences between humans and
nonhumans, they share the capacity to suffer, and we must give equal
consideration to that suffering. Any position that allows similar cases
to be treated in a dissimilar fashion fails to qualify as an acceptable
moral theory. The term caught on; Singer wrote that it was an awkward
word but that he could not think of a better one. It became an entry in
the Oxford English Dictionary in 1985, defined as "discrimination
against or exploitation of animal species by human beings, based on an
assumption of mankind's superiority".[13] In 1994 the Oxford Dictionary of Philosophy
offered a wider definition: "By analogy with racism and sexism, the
improper stance of refusing respect to the lives, dignity, or needs of
animals of other than the human species."[14]
The Trial of Bill Burns (1838) in London showing Richard Martin (MP for Galway) in court with a donkey beaten by his owner, leading to the world's first known conviction for animal cruelty
Paola Cavalieri writes that the current humanist
paradigm is that only human beings are members of the moral community,
and that all are worthy of equal protection. Species membership, she
writes, is ipso facto moral membership. The paradigm has an
inclusive side (all human beings deserve equal protection) and an
exclusive one (only human beings have that status).[3]
She writes that it is not only philosophers who have difficulty with this concept.[3] Richard Rorty
(1931–2007) argued that most human beings – those outside what he
called our "Eurocentric human rights culture" – are unable to understand
why membership of a species would in itself be sufficient for inclusion
in the moral community: "Most people live in a world in which it would
be just too risky – indeed, it would often be insanely dangerous – to
let one's sense of moral community stretch beyond one's family, clan or
tribe." Rorty wrote:
Such people are morally offended by the suggestion
that they should treat someone who is not kin as if he were a brother,
or a nigger as if he were white, or a queer as if he were normal, or an
infidel as if she were a believer. They are offended by the suggestion
that they treat people whom they do not think of as human as if they
were human. When utilitarians tell them that all pleasures and pains
felt by members of our biological species are equally relevant to moral
deliberation, or when Kantians tell them that the ability to engage in
such deliberation is sufficient for membership in the moral community,
they are incredulous. They rejoin that these philosophers seem oblivious
to blatantly obvious moral distinctions, distinctions that any decent
person will draw.[16]
Much of humanity is similarly offended by the suggestion that the
moral community be extended to nonhumans. Nonhumans do possess some
moral status in many societies, but it generally extends only to
protection against what Cavalieri calls "wanton cruelty".[3]
Anti-speciesists argue that the extension of moral membership to all
humanity, regardless of individual properties such as intelligence,
while denying it to nonhumans, also regardless of individual properties,
is internally inconsistent. According to the argument from marginal cases,
if infants, the senile, the comatose, and the cognitively disabled
(marginal-case human beings) have a certain moral status, then nonhuman
animals must be awarded that status too, since there is no morally
relevant ability that the marginal-case humans have that nonhumans lack.
American legal scholar Steven M. Wise argues that speciesism is a bias as arbitrary as any other. He cites the philosopher R.G. Frey
(1941–2012), a leading animal rights critic, who wrote in 1983 that, if
forced to choose between abandoning experiments on animals and allowing
experiments on "marginal-case" humans, he would choose the latter, "not
because I begin a monster and end up choosing the monstrous, but
because I cannot think of anything at all compelling that cedes all
human life of any quality greater value than animal life of any
quality".[17]
"Discontinuous mind"
Richard Dawkins argues against speciesism as an example of the "discontinuous mind"
Richard Dawkins, the evolutionary biologist, argued against speciesism in The Blind Watchmaker (1986), The Great Ape Project (1993), and The God Delusion (2006), elucidating the connection with evolutionary theory.
He compares former racist attitudes and assumptions to their
present-day speciesist counterparts. In the chapter "The one true tree
of life" in The Blind Watchmaker, he argues that it is not only zoological taxonomy
that is saved from awkward ambiguity by the extinction of intermediate
forms, but also human ethics and law. Dawkins argues that what he calls
the "discontinuous mind" is ubiquitous, dividing the world into units
that reflect nothing but our use of language, and animals into
discontinuous species:[18]
The director of a zoo is entitled to "put down" a
chimpanzee that is surplus to requirements, while any suggestion that he
might "put down" a redundant keeper or ticket-seller would be greeted
with howls of incredulous outrage. The chimpanzee is the property of the
zoo. Humans are nowadays not supposed to be anybody's property, yet the
rationale for discriminating against chimpanzees is seldom spelled out,
and I doubt if there is a defensible rationale at all. Such is the
breathtaking speciesism of our Christian-inspired attitudes, the
abortion of a single human zygote
(most of them are destined to be spontaneously aborted anyway) can
arouse more moral solicitude and righteous indignation than the
vivisection of any number of intelligent adult chimpanzees! ... The only
reason we can be comfortable with such a double standard is that the
intermediates between humans and chimps are all dead.[19]
Dawkins elaborated in a discussion with Singer at The Center for Inquiry
in 2007, when asked whether he continues to eat meat: "It's a little
bit like the position which many people would have held a couple of
hundred years ago over slavery. Where lots of people felt morally uneasy about slavery but went along with it because the whole economy of the South depended upon slavery."[20]
Animal holocaust
David Sztybel
argues in his paper, "Can the Treatment of Animals Be Compared to the
Holocaust?" (2006), that the racism of the Nazis is comparable to the
speciesism inherent in eating meat or using animal by-products,
particularly those produced on factory farms.[8]
Y. Michael Barilan, an Israeli physician, argues that speciesism is not
the same thing as Nazi racism, because the latter extolled the abuser
and condemned the weaker and the abused. He describes speciesism as the
recognition of rights on the basis of group membership, rather than
solely on the basis of moral considerations.[21]
Centrality of consciousness
"Libertarian extension" is the idea that the intrinsic value of nature can be extended beyond sentient beings.[22]
This seeks to apply the principle of individual rights not only to all
animals but also to objects without a nervous system such as trees,
plants, and rocks.[23]
Ryder rejects this argument, writing that "value cannot exist in the
absence of consciousness or potential consciousness. Thus, rocks and
rivers and houses have no interests and no rights of their own. This
does not mean, of course, that they are not of value to us, and to many
other painients, including those who need them as habitats and who would
suffer without them."[24]
Arguments in favor
Philosophical
A common theme in defending speciesism is the argument that humans have the right to exploit other species to defend their own.[25] Philosopher Carl Cohen
argued in 1986: "Speciesism is not merely plausible; it is essential
for right conduct, because those who will not make the morally relevant
distinctions among species are almost certain, in consequence, to
misapprehend their true obligations."[26]
Cohen writes that racism and sexism are wrong because there are no
relevant differences between the sexes or races. Between people and
animals, he argues, there are significant differences; his view is that
animals do not qualify for Kantian personhood, and as such have no rights.[27]
Nel Noddings, the American feminist, has criticized Singer's concept of speciesism as simplistic, and as failing to take into account the context of species preference in the way that concepts of racism and sexism have taken into account the context of discrimination against humans.[28] Peter Staudenmaier has argued that comparisons between speciesism and racism or sexism are trivializing:
The central analogy to the civil rights movement and the
women's movement is trivializing and ahistorical. Both of those social
movements were initiated and driven by members of the dispossessed and
excluded groups themselves, not by benevolent men or white people acting
on their behalf. Both movements were built precisely around the idea of
reclaiming and reasserting a shared humanity in the face of a society
that had deprived it and denied it. No civil rights activist or feminist
ever argued, "We're sentient beings too!" They argued, "We're fully
human too!" Animal liberation doctrine, far from extending this humanist
impulse, directly undermines it.[29]
Another criticism of animal-type anti-speciesism is based on the distinction between demanding rights one wants and being placed under rights one may not want. Many people who are now adults, but who remember their time as minors as one in which their alleged children's rights amounted to legalized torture, doubt whether animal rights do animals any good, especially since animals cannot even say what they consider to be horrible. A distinction is made between people who are extrinsically denied the possibility of saying what they think (by age-of-majority limits, psychiatric diagnoses based on domain-specific hypotheses, or other constructed laws) and marginal-case humans who are intrinsically incapable of opining about their situation. The former case is considered comparable to racism and sexism; the latter is considered comparable to animals.[30]
This extends to questioning and rejecting the very definition of
"wanton cruelty". One example that has been pointed out is that since we
do not know whether or not animals are aware of death, all ethical considerations on putting animals down are benighted.[31]
Advocates of this way of partly accepting speciesism generally do not
subscribe to arguments about alleged dehumanization or other legalistic
type arguments, and have no problem with accepting possible future
encounters with extraterrestrial intelligence or artificial intelligence as equals.[32][33]
Ayn Rand's Objectivism
holds that humans are the only beings who have what Rand called a
conceptual consciousness, and the ability to reason and develop a moral
system. She argued that humans are therefore the only species entitled
to rights. Randian philosopher Leonard Peikoff
argued: "By its nature and throughout the animal kingdom, life survives
by feeding on life. To demand that man defer to the 'rights' of other species is to deprive man himself of the right to life. This is 'other-ism,' i.e. altruism, gone mad."[34]
The British philosopher Roger Scruton
regards the emergence of the animal rights and anti-speciesism movement
as "the strangest cultural shift within the liberal worldview", because
the idea of rights and responsibilities is, he argues, distinctive to
the human condition, and it makes no sense to spread them beyond our own
species. Scruton argues that if animals have rights, then they also
have duties, which animals would routinely violate, with almost all of
them being "habitual law-breakers" and predatory animals such as foxes,
wolves and killer whales being "inveterate murderers" who "should be
permanently locked up". He accuses anti-speciesism advocates of
"pre-scientific" anthropomorphism, attributing traits to animals that are, he says, Beatrix Potter-like, where "only man is vile." It is, he argues, a fantasy, a world of escape.[35]
Religious
The
Rev. John Tuohey, founder of the Providence Center for Health Care
Ethics, writes that the logic behind the anti-speciesism critique is
flawed, and that, although the animal rights movement in the United
States has been influential in slowing animal experimentation, and in
some cases halting particular studies, no one has offered a compelling
argument for species equality.[36]
Some proponents of speciesism believe that animals exist so that
humans may make use of them. They argue that this special status conveys
special rights, such as the right to life, and also unique responsibilities, such as stewardship of the environment. This belief in human exceptionalism is often rooted in the Abrahamic religions, for example Genesis 1:26: "Then God said, 'Let Us make man in Our image, according to Our likeness; and let them rule over the fish of the sea and over the birds of the sky and over the cattle and over all the earth, and over every creeping thing that creeps on the earth.'" Animal rights advocates argue that dominion refers to stewardship, not ownership.[37] Jesus Christ taught that a person is worth more than many sparrows.[38] But the Imago Dei may be personhood itself, although humans have so far only achieved efficiencies in educating and otherwise acculturating other humans. Proverbs 12:10 mentions that "The righteous one takes care of his domestic animals."
Law and policy
Law
The
first major statute addressing animal protection in the United States,
titled "An Act for the More Effectual Prevention of Cruelty to Animals",
was enacted in 1867. It provided the right to incriminate and enforce
protection with regard to animal cruelty. The act, which has since been
revised to suit modern cases state by state, originally addressed such
things as animal neglect, abandonment, torture, fighting, transport,
impound standards, and licensing standards.[39]
Although an animal rights movement had already started as early as the late 1800s, some of the laws that would shape the way animals were treated as industry grew were enacted around the same time that Richard Ryder was bringing the notion of speciesism to the conversation.[40] Legislation was being proposed and passed in the U.S. that would reshape animal welfare in industry and science. The Humane Slaughter Act, created to alleviate some of the suffering felt by livestock during slaughter, was passed in 1958. Later, the Animal Welfare Act of 1966, signed by President Lyndon B. Johnson, imposed much stricter regulation and supervision of the handling of animals used in laboratory experimentation and exhibition; it has since been amended and expanded.[41]
These groundbreaking laws foreshadowed and influenced the shifting attitudes toward nonhuman animals and their right to humane treatment that Richard D. Ryder and Peter Singer would later popularize in the 1970s and 1980s.
Great ape personhood
Great ape personhood is the idea that the attributes of nonhuman great apes
are such that their sentience and personhood should be recognized by
the law, rather than simply protecting them as a group under animal cruelty legislation. Awarding personhood to nonhuman primates would require that their individual interests be taken into account.[42]
MIT’s miniaturized system can deliver multiple drugs to precise locations in the brain, and also monitor and control neural activity (credit: MIT)
MIT researchers have developed a miniaturized system that can deliver
tiny quantities of medicine to targeted brain regions as small as 1
cubic millimeter, with precise control over how much drug is given. The
goal is to treat diseases that affect specific brain circuits without
interfering with the normal functions of the rest of the brain.*
“We believe this tiny microfabricated device could have tremendous
impact in understanding brain diseases, as well as providing new ways of
delivering biopharmaceuticals and performing biosensing in the brain,”
says Robert Langer, the David H. Koch Institute Professor at MIT and one
of the senior authors of an open-access paper that appears in the Jan.
24 issue of Science Translational Medicine.**
Miniaturized
neural drug delivery system (MiNDS). Top: Miniaturized delivery needle
with multiple fluidic channels for delivering different drugs. Bottom:
scanning electron microscope image of cannula tip for delivering a drug
or optogenetic light (to stimulate neurons) and a tungsten electrode
(yellow dotted area — magnified view in inset) for detecting neural
activity. (credit: Dagdeviren et al., Sci. Transl. Med., adapted by
KurzweilAI)
The researchers used state-of-the-art microfabrication techniques to
construct cannulas (thin tubes) with diameters of about 30 micrometers
(width of a fine human hair) and lengths up to 10 centimeters. These
cannulas are contained within a stainless steel needle with a diameter
of about 150 micrometers. Inside the cannulas are small pumps that can
deliver tiny doses (hundreds of nanoliters***) deep into the brains of
rats — with very precise control over how much drug is given and where
it goes.
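For a sense of the volumes involved: a 1 cubic millimeter target region holds exactly one microliter, so a dose of a few hundred nanoliters is a sizeable fraction of the region it is meant to treat. A quick check (the 300 nL dose is an assumed example):

```python
# Unit check: how a "tiny dose" compares with a 1 mm^3 brain region.
TARGET_VOLUME_L = 1e-6   # 1 mm^3 = 1e-6 liters = 1 microliter
dose_l = 300e-9          # an assumed dose of 300 nanoliters

print(f"dose is {dose_l / TARGET_VOLUME_L:.0%} of the target volume")  # -> 30%
```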
In one experiment, they delivered a drug called muscimol to a rat
brain region called the substantia nigra, which is located deep within
the brain and helps to control movement. Previous studies have shown
that muscimol induces symptoms similar to those seen in Parkinson’s
disease. The researchers were able to stimulate the rats to continually
turn in a clockwise direction. They could also halt the
Parkinsonian behavior by delivering a dose of saline through a different
channel to wash the drug away.
“Since the device can be customizable, in the future we can have
different channels for different chemicals, or for light, to target
tumors or neurological disorders such as Parkinson’s disease or
Alzheimer’s,” says Canan Dagdeviren, the LG Electronics Career Development Assistant Professor of Media Arts and Sciences and the lead author of the paper.
This device could also make it easier to deliver potential new
treatments for behavioral neurological disorders such as addiction or
obsessive compulsive disorder. (These may be caused by specific
disruptions in how different parts of the brain communicate with each
other.)
Measuring drug response
The researchers also showed that they could incorporate an electrode
into the tip of the cannula, which can be used to monitor how neurons’
electrical activity changes after drug treatment. They are now working
on adapting the device so it can also be used to measure chemical or
mechanical changes that occur in the brain following drug treatment.
The cannulas can be fabricated in nearly any length or thickness,
making it possible to adapt them for use in brains of different sizes,
including the human brain, the researchers say.
“This study provides proof-of-concept experiments, in large animal
models, that a small, miniaturized device can be safely implanted in the
brain and provide miniaturized control of the electrical activity and
function of single neurons or small groups of neurons. The impact of
this could be significant in focal diseases of the brain, such as
Parkinson’s disease,” says Antonio Chiocca, neurosurgeon-in-chief and
chairman of the Department of Neurosurgery at Brigham and Women’s
Hospital, who was not involved in the research.
The research was funded by the National Institutes of Health and the
National Institute of Biomedical Imaging and Bioengineering.
* To treat brain disorders, drugs (such as l-dopa, a dopamine
precursor used to treat Parkinson’s disease, and Prozac, used to boost
serotonin levels in patients with depression) often interact with brain
chemicals called neurotransmitters (or the cell receptors interact with
neurotransmitters) — creating side effects throughout the brain.
** Michael Cima, the David H. Koch Professor of
Engineering in the Department of Materials Science and Engineering and a
member of MIT’s Koch Institute for Integrative Cancer Research, is also
a senior author of the paper.
*** It would take one billion nanoliter drops to fill 4 cups.
Abstract of Miniaturized neural system for chronic, local intracerebral drug delivery
Recent advances in medications for neurodegenerative disorders are
expanding opportunities for improving the debilitating symptoms suffered
by patients. Existing pharmacologic treatments, however, often rely on
systemic drug administration, which results in broad drug distribution
and consequent increased risk for toxicity. Given that many key neural
circuitries have sub–cubic millimeter volumes and cell-specific
characteristics, small-volume drug administration into affected brain
areas with minimal diffusion and leakage is essential. We report the
development of an implantable, remotely controllable, miniaturized
neural drug delivery system permitting dynamic adjustment of therapy
with pinpoint spatial accuracy. We demonstrate that this device can
chemically modulate local neuronal activity in small (rodent) and large
(nonhuman primate) animal models, while simultaneously allowing the
recording of neural activity to enable feedback control.
Anthropocentrism (/ˌænθroʊpoʊˈsɛntrɪzəm/; from Ancient Greek: ἄνθρωπος, ánthrōpos, "human being"; and κέντρον, kéntron, "center") is the belief that human beings are the most significant entity of the universe. Anthropocentrism interprets or regards the world in terms of human values and experiences. The term can be used interchangeably with humanocentrism, and some refer to the concept as human supremacy or human exceptionalism.
Anthropocentrism is considered to be profoundly embedded in many modern
human cultures and conscious acts. It is a major concept in the field
of environmental ethics and environmental philosophy, where it is often considered to be the root cause of problems created by human action within the ecosphere.
However, many proponents of anthropocentrism state that this is
not necessarily the case: they argue that a sound long-term view
acknowledges that a healthy, sustainable environment is necessary for
humans and that the real issue is shallow anthropocentrism.
Environmental philosophy
Anthropocentrism, also known as homocentricism or human supremacism,[5] has been posited by some environmentalists, in such books as Confessions of an Eco-Warrior by Dave Foreman and Green Rage
by Christopher Manes, as the underlying (if unstated) reason why
humanity dominates and sees the need to "develop" most of the Earth.
Anthropocentrism is believed by some to be the central problematic concept in environmental philosophy, where it is used to draw attention to claims of a systematic bias in traditional Western attitudes to the non-human world.[6] Val Plumwood has argued[7][8] that anthropocentrism plays an analogous role in green theory to androcentrism in feminist theory and ethnocentrism in anti-racist theory. Plumwood calls human-centredness "anthrocentrism" to emphasise this parallel.
One of the first extended philosophical essays addressing environmental ethics, John Passmore's Man's Responsibility for Nature[9] has been criticised by defenders of deep ecology because of its anthropocentrism, often claimed to be constitutive of traditional Western moral thought.[10]
Indeed, defenders of anthropocentrism concerned with the ecological
crisis contend that the maintenance of a healthy, sustainable
environment is necessary for human well-being as opposed to for its own
sake. The problem with a "shallow" viewpoint is not that it is
human-centred but that according to William Grey: "What's wrong with
shallow views is not their concern about the well-being of humans, but
that they do not really consider enough in what that well-being
consists. According to this view, we need to develop an enriched,
fortified anthropocentric notion of human interest to replace the
dominant short-term, sectional and self-regarding conception."[11] In turn, Plumwood in Environmental Culture: The Ecological Crisis of Reason argued that Grey's anthropocentrism is inadequate.[12]
It is important to note that many devoted environmentalists hold a somewhat anthropocentric philosophical view, in that they argue in favor of saving the environment for the sake of human populations. Grey writes: "We should be concerned to promote a rich, diverse, and vibrant biosphere. Human flourishing may certainly be included as a legitimate part of such a flourishing."[13] Such a concern for human flourishing amidst the flourishing of life as a whole, however, is said to be indistinguishable from that of deep ecology and biocentrism, which has been proposed both as an antithesis of anthropocentrism[14] and as a generalised form of anthropocentrism.[15]
Judeo-Christian tradition
Maimonides, a scholar of the Torah
who lived in the 12th century AD, was noted for being decidedly
anti-anthropocentric. Maimonides called man "a mere 'drop of the bucket'" and "not 'the axle of the world'".[16] He also claimed that anthropocentric thinking is what causes humans to think that evil things exist in nature.[17] According to Rabbi Norman Lamm, Maimonides "thus deflate[d] man's extravagant notions of his own importance and urge[d] us to abandon these illusions."[16]
In the 1985 CBC series "A Planet For the Taking", Dr. David Suzuki explored the Old Testament
roots of anthropocentrism and how it shaped our view of non-human
animals. Some Christian proponents of anthropocentrism base their belief
on the Bible, such as the verse 1:26 in the Book of Genesis:
And God said, Let us make man in
our image, after our likeness: and let them have dominion over the fish
of the sea, and over the fowl of the air, and over the cattle, and over
all the earth, and over every creeping thing that creepeth upon the
earth.
The use of the word "dominion" in Genesis is controversial. Many Biblical scholars, especially Roman Catholic and other non-Protestant Christians, consider it a flawed translation of a word meaning "stewardship", which would indicate that mankind should take care of the earth and its various forms of life.
Human rights
Anthropocentrism is the grounding for some naturalistic concepts of human rights. Defenders of anthropocentrism argue that it is the necessary fundamental premise to defend universal human rights, since what matters morally is simply being human. For example, noted philosopher Mortimer J. Adler
wrote, "Those who oppose injurious discrimination on the moral ground
that all human beings, being equal in their humanity, should be treated
equally in all those respects that concern their common humanity, would
have no solid basis in fact to support their normative principle." Adler is stating here that denying what is now called human exceptionalism could lead to tyranny, writing that if we ever came to believe that
humans do not possess a unique moral status, the intellectual foundation
of our liberties collapses: "Why, then, should not groups of superior
men be able to justify their enslavement, exploitation, or even genocide
of inferior human groups on factual and moral grounds akin to those we
now rely on to justify our treatment of the animals we harness as beasts
of burden, that we butcher for food and clothing, or that we destroy as
disease-bearing pests or as dangerous predators?"[18]
Author and anthropocentrism defender Wesley J. Smith from the Discovery Institute
has written that human exceptionalism is what gives rise to human
duties to each other, the natural world, and to treat animals humanely.
In A Rat is a Pig is a Dog is a Boy, a critique of animal rights ideology, he wrote: "Because we are unquestionably a unique species—the only species capable of even contemplating ethical issues and assuming responsibilities—we uniquely are capable of apprehending the difference between right and wrong, good and evil, proper and improper conduct toward animals. Or to put it more succinctly, if being human isn't what requires us to treat animals humanely, what in the world does?"[19]
Cognitive psychology
In cognitive psychology,
anthropocentric thinking can be defined as "the tendency to reason
about unfamiliar biological species or processes by analogy to humans".[20] Reasoning by analogy is an attractive thinking strategy, and it can be tempting to apply our own experience of being human to other biological systems. For example, because death is commonly felt to be undesirable, it may be tempting to form the misconception that death at a cellular level or elsewhere in nature is similarly undesirable (whereas in reality programmed cell death is an essential physiological phenomenon, and ecosystems also rely on death).[20]
Conversely, anthropocentric thinking can also lead people to
underattribute human characteristics to other organisms. For instance,
it may be tempting to wrongly assume that an animal that is very
different from humans, such as an insect, will not share particular
biological characteristics, such as reproduction or blood circulation.[20]
Anthropocentric thinking has predominantly been studied in young children (mostly up to the age of 10) by developmental psychologists interested in its relevance to biology education.
Although relatively little is known about its persistence at a later
age, evidence exists that this pattern of human exceptionalist thinking
can continue through young adulthood, even among students who have been
increasingly educated in biology.[21]
The notion that anthropocentric thinking is an innate
human characteristic has been challenged by study of American children
raised in urban environments, among whom it appears to emerge between
the ages of 3 and 5 years as an acquired perspective.[22]
Children's recourse to anthropocentric thinking seems to vary with
experience and cultural assumptions about the place of humans in the
natural world.[20]
Children raised in rural environments appear to use it less than their
urban counterparts because of their greater familiarity with different
species of animals and plants.[20] Studies involving children from some of the indigenous peoples of the Americas have found little use of anthropocentric thinking.[20][23] Study of children among the Wichí people in South America showed a tendency to think of living organisms in terms of their taxonomic or perceived similarities, ecological considerations, and animistic
traditions, resulting in a much less anthropocentric view of the
natural world than is experienced by many children in Western societies.[23]
In popular culture
Fiction from all eras and societies treats as normal the human practice of riding, eating, milking, and otherwise treating animals as separate species. There are occasional exceptions, such as talking animals, but they are generally treated as exceptions, as aberrations from the rule distinguishing people from animals.
In science fiction, humanocentrism is the idea that humans, as both beings and as a species, are the superior sentients. Essentially the equivalent of racial supremacy on a galactic scale, it entails intolerant discrimination against sentient non-humans,
much like race supremacists discriminate against those not of their
race. A prime example of this concept is utilized as a story element for
the Mass Effect
series. After humanity's first contact results in a brief war, many
humans in the series develop suspicious or even hostile attitudes
towards the game's various alien races. By the time of the first game,
which takes place several decades after the war, many humans still
retain such sentiments in addition to forming 'pro-human' organizations.
The 2012 documentary The Superior Human? systematically
analyzes anthropocentrism and concludes that value is fundamentally an
opinion, and since life forms naturally value their own traits, most
humans are misled to believe that they are actually more valuable than
other species. This natural bias, according to the film, combined with a received sense of comfort and an excuse for the exploitation of non-humans, causes anthropocentrism to remain in society.
NIST’s
artificial synapse, designed for neuromorphic computing, mimics the
operation of a switch between two neurons. One artificial synapse is
located at the center of each X. This chip is 1 square centimeter in
size. (The thick black vertical lines are electrical probes used for
testing.) (credit: NIST)
The NIST switch, described in an open-access paper in Science Advances, provides a missing link for neuromorphic (brain-like) computers, according to the researchers. Such “non-von Neumann
architecture” future computers could significantly speed up analysis
and decision-making for applications such as self-driving cars and
cancer diagnosis.
A synapse is a connection or switch between two neurons, controlling transmission of signals. (credit: NIST)
NIST’s artificial synapse is a metallic cylinder 10 micrometers in
diameter — about 10 times larger than a biological synapse. It simulates
a real synapse by processing incoming electrical spikes (pulsed current
from a neuron) and customizing spiking output signals. The more
firing between cells (or processors), the stronger the connection. That
process enables both biological and artificial synapses to maintain old
circuits and create new ones.
Dramatically faster and lower-energy than human synapses
But the NIST synapse has two unique features that the researchers say
are superior to human synapses and to other artificial synapses:
It can fire at least 1 billion times per second, far faster than a brain cell’s rate of about 50 times per second, and its underlying junction dynamics exceed 100 GHz.
It uses only about one ten-thousandth as much energy as a human
synapse. The spiking energy is less than 1 attojoule** — roughly equivalent to the minuscule chemical energy bonding two atoms in a molecule — compared to the roughly 10 femtojoules (10,000 attojoules) per synaptic event in the human brain (a quick arithmetic check follows below). Current neuromorphic platforms
are orders of magnitude less efficient than the human brain. “We don’t
know of any other artificial synapse that uses less energy,” NIST
physicist Mike Schneider said.
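The quoted ratio is easy to verify: 1 attojoule against 10 femtojoules is a factor of ten thousand.

```python
# Arithmetic check of the energy comparison above.
spike_energy_j = 1e-18    # < 1 attojoule per spike (device upper bound)
brain_energy_j = 10e-15   # ~10 femtojoules per biological synaptic event

print(brain_energy_j / spike_energy_j)   # -> 10000.0, i.e. one ten-thousandth
```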
Superconducting devices mimicking brain cells and transmission lines
have been developed, but until now, efficient synapses — a crucial piece
— have been missing. The new Josephson junction-based
artificial synapse would be used in neuromorphic computers made of
superconducting components (which can transmit electricity without
resistance), so they would be more efficient than designs based on
semiconductors or software. Data would be transmitted, processed, and
stored in units of magnetic flux.
The brain is especially powerful for tasks like image recognition
because it processes data both in sequence and simultaneously and
it stores memories in synapses all over the system. A conventional
computer processes data only in sequence and stores memory in a separate
unit.
The new NIST artificial synapses combine small size, superfast
spiking signals, and low energy needs, and could be stacked into dense
3D circuits for creating large systems. They could provide a unique
route to a far more complex and energy-efficient neuromorphic system
than has been demonstrated with other technologies, according to the
researchers.
Nature News does raise some concerns
about the research, quoting neuromorphic-technology experts: “Millions
of synapses would be necessary before a system based on the technology
could be used for complex computing; it remains to be seen whether it
will be possible to scale it to this level. … The synapses can only
operate at temperatures close to absolute zero, and need to be cooled
with liquid helium. This might make the chips impractical for use
in small devices, although a large data centre might be able to maintain
them. … We don’t yet understand enough about the key properties of the
[biological] synapse to know how to use them effectively.”
Inside a superconducting synapse
The NIST synapse is a customized Josephson junction***, long used in NIST voltage standards.
These junctions are a sandwich of superconducting materials with an
insulator as a filling. When an electrical current through the junction
exceeds a level called the critical current, voltage spikes are
produced.
Illustration
showing the basic operation of NIST’s artificial synapse, based on a
Josephson junction. Very weak electrical current pulses are used to
control the number of nanoclusters (green) pointing in the same
direction. Shown here: a “magnetically disordered state” (left) vs.
“magnetically ordered state” (right). (credit: NIST)
Each artificial synapse uses standard niobium electrodes but has a
unique filling made of nanoscale clusters (“nanoclusters”) of manganese
in a silicon matrix. The nanoclusters — about 20,000 per square
micrometer — act like tiny bar magnets with “spins” that can be oriented
either randomly or in a coordinated manner. The number of nanoclusters
pointing in the same direction can be controlled, which affects the
superconducting properties of the junction.
Diagram
of circuit used in the simulation. The blue and red areas represent
pre- and post-synapse neurons, respectively. The X symbol represents the
Josephson junction. (credit: Michael L. Schneider et al./Science
Advances)
The synapse rests in a superconducting state, except when it’s
activated by incoming current and starts producing voltage spikes.
Researchers apply current pulses in a magnetic field to boost the
magnetic ordering — that is, the number of nanoclusters pointing in the
same direction.
This magnetic effect progressively reduces the critical current
level, making it easier to create a normal conductor and produce voltage
spikes. The critical current is the lowest when all the nanoclusters
are aligned. The process is also reversible: Pulses are applied without a
magnetic field to reduce the magnetic ordering and raise the critical
current. This design, in which different inputs alter the spin alignment
and resulting output signals, is similar to how the brain operates.
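The switching cycle just described lends itself to a toy model: write pulses in a magnetic field raise the ordering fraction, which lowers the critical current until a fixed drive current is enough to trigger spiking. All constants below are illustrative placeholders, not measured device values.

```python
# Toy model of the tunable Josephson synapse described above. The ordering
# fraction m (share of aligned nanoclusters) is raised by pulses applied in
# a magnetic field and lowered by pulses without one; the critical current
# falls as m rises, so an ordered junction spikes at a lower drive current.

def step_ordering(m, in_field, delta=0.1):
    """One write pulse: raise ordering in a field, lower it otherwise."""
    m = m + delta if in_field else m - delta
    return min(max(m, 0.0), 1.0)

def critical_current(m, ic_disordered=100e-6, ic_ordered=20e-6):
    # Assumed linear interpolation between disordered and ordered limits.
    return ic_disordered + m * (ic_ordered - ic_disordered)

def spiking(drive, m):
    """The junction produces voltage spikes once drive exceeds Ic."""
    return drive > critical_current(m)

m = 0.0            # start magnetically disordered
drive = 70e-6      # amperes; below the disordered critical current
for pulse in range(5):
    m = step_ordering(m, in_field=True)
    print(f"pulse {pulse + 1}: m={m:.1f}, "
          f"Ic={critical_current(m) * 1e6:.0f} uA, spiking={spiking(drive, m)}")
```

Running this shows the junction crossing into the spiking regime after the fourth pulse, which is the sense in which the critical current acts like a synaptic weight.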
Synapse behavior can also be tuned by changing how the device is made
and its operating temperature. By making the nanoclusters smaller,
researchers can reduce the pulse energy needed to raise or lower the
magnetic order of the device. Raising the operating temperature slightly
from minus 271.15 degrees C (minus 456.07 degrees F) to minus 269.15
degrees C (minus 452.47 degrees F), for example, results in more and
higher voltage spikes.
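Those temperatures are simply 2 K and 4 K expressed in Celsius and Fahrenheit, as a quick conversion confirms:

```python
# Convert the quoted operating temperatures from kelvin.
for kelvin in (2.0, 4.0):
    celsius = kelvin - 273.15
    fahrenheit = celsius * 9 / 5 + 32
    print(f"{kelvin:.0f} K = {celsius:.2f} C = {fahrenheit:.2f} F")
```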
* Future exascale supercomputers would run at 10^18 flops (one exaflops; “flops” = floating point operations per second) or more. The
current fastest supercomputer — the Sunway TaihuLight — operates at
about 0.1 exaflops; zettascale computers, the next step beyond exascale,
would run 10,000 times faster than that.
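The footnote's scale claim checks out: zettascale (10^21 flops) against the TaihuLight's roughly 10^17 flops is a factor of 10,000.

```python
# Scale check for the supercomputer footnote.
zettascale_flops = 1e21
taihulight_flops = 0.1e18   # ~0.1 exaflops

print(zettascale_flops / taihulight_flops)   # -> 10000.0
```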
** An attojoule is 10^-18 joule, a unit of energy, and is one-thousandth of a femtojoule.
*** The Josephson effect is the phenomenon of supercurrent
— i.e., a current that flows indefinitely long without any voltage
applied — across a device known as a Josephson junction, which consists
of two superconductors coupled by a weak link. — Wikipedia
Abstract of Ultralow power artificial synapses using nanotextured magnetic Josephson junctions
Neuromorphic computing promises to markedly improve the efficiency of
certain computational tasks, such as perception and decision-making.
Although software and specialized hardware implementations of neural
networks have made tremendous accomplishments, both implementations are
still many orders of magnitude less energy efficient than the human
brain. We demonstrate a new form of artificial synapse based on
dynamically reconfigurable superconducting Josephson junctions with
magnetic nanoclusters in the barrier. The spiking energy per pulse
varies with the magnetic configuration, but in our demonstration
devices, the spiking energy is always less than 1 aJ. This compares very
favorably with the roughly 10 fJ per synaptic event in the human brain.
Each artificial synapse is composed of a Si barrier containing Mn
nanoclusters with superconducting Nb electrodes. The critical current of
each synapse junction, which is analogous to the synaptic weight, can
be tuned using input voltage spikes that change the spin alignment of Mn
nanoclusters. We demonstrate synaptic weight training with electrical
pulses as small as 3 aJ. Further, the Josephson plasma frequencies of
the devices, which determine the dynamical time scales, all exceed 100
GHz. These new artificial synapses provide a significant step toward a
neuromorphic platform that is faster, more energy-efficient, and thus
can attain far greater complexity than has been demonstrated with other
technologies.
Humanism is a philosophical and ethical stance that emphasizes the value and agency of human beings, individually and collectively, and generally prefers critical thinking and evidence (rationalism and empiricism) over acceptance of dogma or superstition. The meaning of the term humanism has fluctuated according to the successive intellectual movements which have identified with it. The term was coined by theologian Friedrich Niethammer at the beginning of the 19th century
to refer to a system of education based on the study of classical
literature ("classical humanism"). Generally, however, humanism refers
to a perspective that affirms some notion of human freedom
and progress. It views humans as solely responsible for the promotion
and development of individuals and emphasizes a concern for man in
relation to the world.
The word "humanism" is ultimately derived from the Latin concept humanitas.
It entered English in the nineteenth century. However, historians
agree that the concept predates the label invented to describe it,
encompassing the various meanings ascribed to humanitas, which included both benevolence toward one's fellow humans and the values imparted by bonae litterae or humane learning (literally "good letters").
In the second century AD, a Latin grammarian, Aulus Gellius (c.125 – c.180), complained:
Those who have spoken Latin and have used the language correctly do not give to the word humanitas the meaning which it is commonly thought to have, namely, what the Greeks call φιλανθρωπία (philanthropy), signifying a kind of friendly spirit and good-feeling towards all men without distinction; but they gave to humanitas the force of the Greek παιδεία (paideia); that is, what we call eruditionem institutionemque in bonas artes, or "education and training in the liberal arts".
Those who earnestly desire and seek after these are most highly humanized. For the desire to pursue that kind of knowledge, and the training given by it, has been granted to humanity alone of all the animals, and for that reason it is termed humanitas, or "humanity".[5]
Gellius says that in his day humanitas is commonly used as a synonym for philanthropy – or
kindness and benevolence toward one's fellow human beings. Gellius
maintains that this common usage is wrong, and that model writers of
Latin, such as Cicero and others, used the word only to mean what we
might call "humane" or "polite" learning, or the Greek equivalent Paideia. Yet in seeking to restrict the meaning of humanitas
to literary education this way, Gellius was not advocating a retreat
from political engagement into some ivory tower, though it might look
like that to us. He himself was involved in public affairs. According to
legal historian Richard Bauman, Gellius was a judge as well as a
grammarian and was an active participant in the great contemporary debate
on harsh punishments that accompanied the legal reforms of Antoninus Pius
(one of these reforms, for example, was that a prisoner was not to be
treated as guilty before being tried). "By assigning pride of place to
Paideia in his comment on the etymology of humanitas, Gellius implies that the trained mind is best equipped to handle the problems troubling society."[6]
Gellius's writings fell into obscurity during the Middle Ages,
but during the Italian Renaissance, Gellius became a favorite author.
Teachers and scholars of Greek and Latin grammar, rhetoric, philosophy,
and poetry were called and called themselves "humanists".[7][8] Modern scholars, however, point out that Cicero (106–43 BCE), who was most responsible for defining and popularizing the term humanitas,
in fact frequently used the word in both senses, as did his near
contemporaries. For Cicero, a lawyer, what most distinguished humans
from brutes was speech, which, allied to reason, could (and should)
enable them to settle disputes and live together in concord and harmony
under the rule of law.[9] Thus humanitas included two meanings from the outset and these continue in the modern derivative, humanism,
which even today can refer to both humanitarian benevolence and to a
method of study and debate involving an accepted group of authors and a
careful and accurate use of language.[10]
During the French Revolution, and soon after, in Germany (by the Left Hegelians), humanism began to refer to an ethical philosophy centered on humankind, without attention to the transcendent or supernatural. The designation Religious Humanism refers to organized groups that sprang up during the late-nineteenth and early twentieth centuries. It is similar to Protestantism, although centered on human needs, interests, and abilities rather than the supernatural.[11] In the Anglophone world, such modern, organized forms of humanism, which are rooted in the 18th-century Enlightenment, have largely detached themselves from the historic connection of humanism with classical learning and the liberal arts.
An ideal society as conceived by Renaissance humanist Saint Thomas More in his book Utopia
In 1808 Bavarian educational commissioner Friedrich Immanuel Niethammer coined the term Humanismus to describe the new classical curriculum he planned to offer in German secondary schools,[15]
and by 1836 the word "humanism" had been absorbed into the English
language in this sense. The coinage gained universal acceptance in 1856,
when German historian and philologist Georg Voigt used humanism to describe Renaissance humanism, the movement that flourished in the Italian Renaissance to revive classical learning, a use which won wide acceptance among historians in many nations, especially Italy.[16]
But in the mid-18th century, during the French Enlightenment, a more ideological sense of the term had already come into use. In 1765, the author
of an anonymous article in a French Enlightenment periodical spoke of
"The general love of humanity ... a virtue hitherto quite nameless among
us, and which we will venture to call 'humanism', for the time has come
to create a word for such a beautiful and necessary thing".[17]
The latter part of the 18th and the early 19th centuries saw the
creation of numerous grass-roots "philanthropic" and benevolent
societies dedicated to human betterment and the spreading of knowledge
(some Christian, some not). After the French Revolution,
the idea that human virtue could be created by human reason alone
independently from traditional religious institutions, attributed by
opponents of the Revolution to Enlightenment philosophes such as Rousseau, was violently attacked by influential religious and political conservatives, such as Edmund Burke and Joseph de Maistre, as a deification or idolatry of humanity.[18] Humanism began to acquire a negative sense. The Oxford English Dictionary
records the use of the word "humanism" by an English clergyman in 1812
to indicate those who believe in the "mere humanity" (as opposed to the
divine nature) of Christ, i.e., Unitarians and Deists.[19] In this polarised atmosphere, in which established ecclesiastical bodies tended to circle the wagons
and reflexively oppose political and social reforms like extending the
franchise, universal schooling, and the like, liberal reformers and
radicals embraced the idea of Humanism as an alternative religion of
humanity. The anarchist Proudhon (best known for declaring that "property is theft") used the word "humanism" to describe a "culte, déification de l’humanité" ("worship, deification of humanity") and Ernest Renan in L’avenir de la science: pensées de 1848 ("The Future of Knowledge: Thoughts on 1848") (1848–49), states: "It is my deep conviction that pure humanism
will be the religion of the future, that is, the cult of all that
pertains to humanity—all of life, sanctified and raised to the level of a
moral value."[20]
At about the same time, the word "humanism" as a philosophy
centred on humankind (as opposed to institutionalised religion) was also
being used in Germany by the Left Hegelians Arnold Ruge and Karl Marx,
who were critical of the close involvement of the church in the German
government. There has been a persistent confusion between the several
uses of the terms:[1]
philanthropic humanists look to what they consider their antecedents in
critical thinking and human-centered philosophy among the Greek
philosophers and the great figures of Renaissance history; and scholarly
humanists stress the linguistic and cultural disciplines needed to
understand and interpret these philosophers and artists.
Predecessors
Ancient South Asia
Human-centered philosophy that rejected the supernatural may also be found circa 1500 BCE in the Lokayata system of Indian philosophy. Nasadiya Sukta, a passage in the Rig Veda, contains one of the first recorded assertions of agnosticism.
In the 6th century BCE, Gautama Buddha expressed, in Pali literature, a skeptical attitude toward the supernatural:[21]
Since neither soul, nor aught belonging to soul, can
really and truly exist, the view which holds that this I who am 'world',
who am 'soul', shall hereafter live permanent, persisting, unchanging,
yea abide eternally: is not this utterly and entirely a foolish
doctrine?
Another instance of ancient humanism as an organised system of thought is found in the Gathas of Zarathustra, composed between 1000 and 600 BCE[22] in Greater Iran.
Zarathustra's philosophy in the Gathas lays out a conception of
humankind as thinking beings, dignified with choice and agency according
to the intellect which each receives from Ahura Mazda (God in the form of supreme wisdom). The idea of Ahura Mazda as a non-intervening deistic god or Great Architect of the Universe
was combined with a unique eschatology and ethical system which implied
that each person is held morally responsible in the afterlife for the
choices they freely made in life.[23]
This importance placed upon thought, action and personal
responsibility, and the concept of a non-intervening creator, was a
source of inspiration to a number of Enlightenment humanist thinkers in Europe such as Voltaire and Montesquieu.
Ancient Greece
The 6th-century BCE pre-Socratic Greek philosophers Thales of Miletus and Xenophanes of Colophon
were the first in the region to attempt to explain the world in terms
of human reason rather than myth and tradition, and thus can be said to
be the first Greek humanists. Thales questioned the notion of
anthropomorphic gods and Xenophanes refused to recognise the gods of his
time and reserved the divine for the principle of unity in the
universe. These Ionian Greeks were the first thinkers to assert that
nature is available to be studied separately from the supernatural
realm. Anaxagoras brought philosophy and the spirit of rational inquiry from Ionia to Athens. Pericles,
the leader of Athens during the period of its greatest glory, was an
admirer of Anaxagoras. Other influential pre-Socratics or rational
philosophers include Protagoras (like Anaxagoras a friend of Pericles), known for his famous dictum "man is the measure of all things" and Democritus,
who proposed that matter was composed of atoms. Little of the written
work of these early philosophers survives and they are known mainly from
fragments and quotations in other writers, principally Plato and Aristotle. The historian Thucydides, noted for his scientific and rational approach to history, is also much admired by later humanists.[24] In the 3rd century BCE, Epicurus became known for his concise phrasing of the problem of evil, lack of belief in the afterlife, and human-centred approaches to achieving eudaimonia. He was also the first Greek philosopher to admit women to his school as a rule.
Medieval Islam
Many medieval Muslim thinkers pursued humanistic, rational and scientific discourses in their search for knowledge, meaning and values. A wide range of Islamic writings on love, poetry, history and philosophical theology show that medieval Islamic thought was open to the humanistic ideas of individualism, occasional secularism, skepticism, and liberalism.[25]
According to Imad-ad-Dean Ahmad, another reason the Islamic world flourished during the Middle Ages was an early emphasis on freedom of speech, as summarised by al-Hashimi (a cousin of Caliph al-Ma'mun) in the following letter to one of the religious opponents he was attempting to convert through reason:[26]
Bring forward all the arguments you
wish and say whatever you please and speak your mind freely. Now that
you are safe and free to say whatever you please appoint some arbitrator
who will impartially judge between us and lean only towards the truth
and be free from the empery of passion, and that arbitrator shall be Reason,
whereby God makes us responsible for our own rewards and punishments.
Herein I have dealt justly with you and have given you full security and
am ready to accept whatever decision Reason may give for me or against
me. For "There is no compulsion in religion" (Qur'an 2:256)
and I have only invited you to accept our faith willingly and of your
own accord and have pointed out the hideousness of your present belief.
Peace be with you and the blessings of God!
Renaissance humanism
Renaissance humanism was an intellectual movement in Europe of the later Middle Ages and the Early Modern period. The 19th-century German historian Georg Voigt (1827–91) identified Petrarch as the first Renaissance humanist. Paul Johnson
agrees that Petrarch was "the first to put into words the notion that
the centuries between the fall of Rome and the present had been the age
of Darkness". According to Petrarch, what was needed to remedy this
situation was the careful study and imitation of the great classical
authors. For Petrarch and Boccaccio, the greatest master was Cicero, whose prose became the model for both learned (Latin) and vernacular (Italian) prose.
Once the language was mastered grammatically it could be
used to attain the second stage, eloquence or rhetoric. This art of
persuasion [Cicero had held] was not art for its own sake, but the
acquisition of the capacity to persuade others – all men and women – to
lead the good life. As Petrarch put it, 'it is better to will the good
than to know the truth'. Rhetoric thus led to and embraced philosophy. Leonardo Bruni (c.1369–1444),
the outstanding scholar of the new generation, insisted that it was
Petrarch who "opened the way for us to show how to acquire learning",
but it was in Bruni's time that the word umanista first came into use, and its subjects of study were listed as five: grammar, rhetoric, poetry, moral philosophy, and history.[28]
Coluccio Salutati, Chancellor of Florence and disciple of Petrarch (1331–1406)
The basic training of the humanist was to speak well and write
(typically, in the form of a letter). One of Petrarch's followers, Coluccio Salutati (1331–1406) was made chancellor of Florence,
"whose interests he defended with his literary skill. The Visconti of
Milan claimed that Salutati’s pen had done more damage than 'thirty
squadrons of Florentine cavalry'".[29]
Poggio
Bracciolini (1380–1459), an early Renaissance humanist, book collector,
and reformer of script, who served as papal secretary[30]
Contrary to a still widely held interpretation that originated in Voigt's celebrated contemporary, Jacob Burckhardt,[31] and which was adopted wholeheartedly – especially by modern thinkers calling themselves "humanists" – [32]
most specialists today do not characterise Renaissance humanism as a
philosophical movement, nor in any way as anti-Christian or even
anti-clerical. A modern historian has this to say:
Humanism was not an ideological
programme but a body of literary knowledge and linguistic skill based on
the "revival of good letters", which was a revival of a late-antique
philology and grammar. This is how the word "humanist" was understood by
contemporaries, and if scholars would agree to accept the word in this
sense rather than in the sense in which it was used in the nineteenth
century we might be spared a good deal of useless argument. That
humanism had profound social and even political consequences for the life
of Italian courts is not to be doubted. But the idea that as a movement
it was in some way inimical to the Church, or to the conservative
social order in general is one that has been put forward for a century
and more without any substantial proof being offered.
The nineteenth-century historian Jacob Burckhardt, in his classic work, The Civilization of the Renaissance in Italy,
noted as a "curious fact" that some men of the new culture were "men of
the strictest piety, or even ascetics". If he had meditated more deeply
on the meaning of the careers of such humanists as Ambrogio Traversari
(1386–1439), the General of the Camaldolese Order, perhaps he would not
have gone on to describe humanism in unqualified terms as "pagan", and
thus helped precipitate a century of infertile debate about the possible
existence of something called "Christian humanism" which ought to be
opposed to "pagan humanism".
— Peter Partner, Renaissance Rome, Portrait of a Society 1500–1559 (University of California Press 1979) pp. 14–15.
The umanisti criticised what they considered the barbarous
Latin of the universities, but the revival of the humanities largely did
not conflict with the teaching of traditional university subjects,
which went on as before.[33]
Nor did the humanists view themselves as in conflict with
Christianity. Some, like Salutati, were the Chancellors of Italian
cities, but the majority (including Petrarch) were ordained as priests,
and many worked as senior officials of the Papal court. Humanist
Renaissance popes Nicholas V, Pius II, Sixtus IV, and Leo X wrote books and amassed huge libraries.[34]
In the High Renaissance,
in fact, there was a hope that more direct knowledge of the wisdom of
antiquity, including the writings of the Church fathers, the earliest
known Greek texts of the Christian Gospels, and in some cases even the
Jewish Kabbalah, would initiate a harmonious new era of universal agreement.[35]
With this end in view, Renaissance Church authorities afforded
humanists what in retrospect appears a remarkable degree of freedom of
thought.[36][37] One humanist, the Greek Orthodox Platonist Gemistus Pletho (1355–1452), based in Mystras, Greece (but in contact with humanists in Florence, Venice, and Rome) taught a Christianised version of pagan polytheism.[38]
The humanists' close study of Latin
literary texts soon enabled them to discern historical differences in
the writing styles of different periods. By analogy with what they saw
as decline of Latin, they applied the principle of ad fontes, or back to the sources, across broad areas of learning, seeking out manuscripts of Patristic literature as well as pagan authors. In 1439, while employed in Naples at the court of Alfonso V of Aragon (at the time engaged in a dispute with the Papal States) the humanist Lorenzo Valla used stylistic textual analysis, now called philology, to prove that the Donation of Constantine, which purported to confer temporal powers on the Pope of Rome, was an 8th-century forgery.[39]
For the next 70 years, however, neither Valla nor any of his
contemporaries thought to apply the techniques of philology to other
controversial manuscripts in this way. Instead, after the fall of the Byzantine Empire
to the Turks in 1453, which brought a flood of Greek Orthodox refugees
to Italy, humanist scholars increasingly turned to the study of Neoplatonism and Hermeticism,
hoping to bridge the differences between the Greek and Roman Churches,
and even between Christianity itself and the non-Christian world.[40]
The refugees brought with them Greek manuscripts, not only of Plato and
Aristotle, but also of the Christian Gospels, previously unavailable in
the Latin West.
After 1517, when the new invention of printing made these texts widely available, the Dutch humanist Erasmus, who had studied Greek at the Venetian printing house of Aldus Manutius,
began a philological analysis of the Gospels in the spirit of Valla,
comparing the Greek originals with their Latin translations with a view
to correcting errors and discrepancies in the latter. Erasmus, along
with the French humanist Jacques Lefèvre d'Étaples,
began issuing new translations, laying the groundwork for the
Protestant Reformation. Henceforth Renaissance humanism, particularly in
the German North, became concerned with religion, while Italian and
French humanism concentrated increasingly on scholarship and philology
addressed to a narrow audience of specialists, studiously avoiding
topics that might offend despotic rulers or which might be seen as
corrosive of faith. After the Reformation, critical examination of the
Bible did not resume until the advent of the so-called Higher criticism of the 19th-century German Tübingen school.
Consequences
The ad fontes principle also had many applications. The
re-discovery of ancient manuscripts brought a more profound and accurate
knowledge of ancient philosophical schools such as Epicureanism, and Neoplatonism,
whose Pagan wisdom the humanists, like the Church fathers of old,
tended, at least initially, to consider as deriving from divine
revelation and thus adaptable to a life of Christian virtue.[41] The line from a drama of Terence, Homo sum, humani nihil a me alienum puto (or with nil for nihil), meaning "I am a human being, I think nothing human alien to me",[42]
known since antiquity through the endorsement of Saint Augustine,
gained renewed currency as epitomising the humanist attitude. The
statement, in a play modeled on or borrowed from a (now lost) Greek comedy
by Menander, may have originated in a lighthearted vein – as a comic
rationale for an old man's meddling – but it quickly became a proverb
and throughout the ages was quoted with a deeper meaning, by Cicero and
Saint Augustine, to name a few, and most notably by Seneca. Richard Bauman writes:
Homo sum: humani nihil a me alienum puto. ("I am a human being: and I deem nothing pertaining to humanity foreign to me.")
The words of the comic playwright P. Terentius
Afer reverberated across the Roman world of the mid-2nd century BCE and
beyond. Terence, an African and a former slave, was well placed to
preach the message of universalism, of the essential unity of the human
race, that had come down in philosophical form from the Greeks, but
needed the pragmatic muscles of Rome in order to become a practical
reality. The influence of Terence's felicitous phrase on Roman thinking
about human rights can hardly be overestimated. Two hundred years later
Seneca ended his seminal exposition of the unity of humankind with a
clarion-call:
There is one short rule that should regulate human
relationships. All that you see, both divine and human, is one. We are
parts of the same great body. Nature created us from the same source and
to the same end. She imbued us with mutual affection and sociability,
she taught us to be fair and just, to suffer injury rather than to
inflict it. She bid us extend our hands to all in need of help. Let that
well-known line be in our heart and on our lips: Homo sum, humani nihil a me alienum puto.[43]
Better acquaintance with Greek and Roman technical writings also influenced the development of European science (see the history of science in the Renaissance). This was despite what A. C. Crombie
(viewing the Renaissance in the 19th-century manner as a chapter in the
heroic March of Progress) calls "a backwards-looking admiration for
antiquity", in which Platonism stood in opposition to the Aristotelian concentration on the observable properties of the physical world.[44] But Renaissance humanists, who considered themselves as restoring the
glory and nobility of antiquity, had no interest in scientific
innovation. However, by the mid-to-late 16th century, even the
universities, though still dominated by Scholasticism, began to demand
that Aristotle be read in accurate texts edited according to the
principles of Renaissance philology, thus setting the stage for
Galileo's quarrels with the outmoded habits of Scholasticism.
Just as artist and inventor Leonardo da Vinci – partaking of the zeitgeist though not himself a humanist – advocated study of human anatomy, nature, and weather to enrich Renaissance works of art, so Spanish-born humanist Juan Luis Vives
(c. 1493–1540) advocated observation, craft, and practical techniques
to improve the formal teaching of Aristotelian philosophy at the
universities, helping to free them from the grip of Medieval
Scholasticism.[45] Thus, the stage was set for the adoption of an approach to natural philosophy based on empirical
observation of, and experimentation on, the physical universe, making
possible the advent of the age of scientific inquiry that followed the
Renaissance.[46]
It was in education that the humanists' program had the most lasting results, their curriculum and methods:
were followed everywhere, serving as models for the
Protestant Reformers as well as the Jesuits. The humanistic school,
animated by the idea that the study of classical languages and
literature provided valuable information and intellectual discipline as
well as moral standards and a civilised taste for future rulers,
leaders, and professionals of its society, flourished without
interruption, through many significant changes, until our own century,
surviving many religious, political and social revolutions. It has but
recently been replaced, though not yet completely, by other more
practical and less demanding forms of education.[47]
From Renaissance to modern humanism
Early humanists saw no conflict between reason and their Christian faith (see Christian Humanism).
They inveighed against the abuses of the Church, but not against the
Church itself, much less against religion. For them, the word "secular"
carried no connotations of disbelief – that would come later, in the
nineteenth century. In the Renaissance to be secular meant simply to be
in the world rather than in a monastery. Petrarch frequently admitted
that his brother Gherardo's life as a Carthusian monk was superior to
his own (although Petrarch himself was in Minor Orders
and was employed by the Church all his life). He hoped that he could do
some good by winning earthly glory and praising virtue, inferior though
that might be to a life devoted solely to prayer. By embracing a
non-theistic philosophic base,[48]
however, the methods of the humanists, combined with their eloquence,
would ultimately have a corrosive effect on established authority.
Yet it was from the Renaissance that modern Secular Humanism grew,
with the development of an important split between reason and religion.
This occurred as the church's complacent authority was exposed in two
vital areas. In science, Galileo's support of the Copernican revolution
upset the church's adherence to the theories of Aristotle, exposing them
as false. In theology, the Dutch scholar Erasmus with his new Greek
text showed that the Roman Catholic adherence to Jerome's Vulgate was
frequently in error. A tiny wedge was thus forced between reason and
authority, as both of them were then understood.[49]
For some, this meant turning back to the Bible as the source of
authority instead of the Catholic Church, for others it was a split from
theism altogether. This was the main divisive line between the
Reformation and the Renaissance,[50]
which dealt with the same basic problems, supported the same science
based on reason and empirical research, but had a different set of
presuppositions (theistic versus naturalistic).[48]
19th and 20th centuries
The phrase the "religion of humanity" is sometimes attributed to American Founding Father Thomas Paine, though as yet unattested in his surviving writings. According to Tony Davies:
Paine called himself a theophilanthropist,
a word combining the Greek for "God", "love", and "humanity", and
indicating that while he believed in the existence of a creating
intelligence in the universe, he entirely rejected the claims made by
and for all existing religious doctrines, especially their miraculous,
transcendental and salvationist pretensions. The Parisian "Society of
Theophilanthropy" which he sponsored, is described by his biographer as
"a forerunner of the ethical and humanist societies that proliferated
later" ... [Paine's book] the trenchantly witty Age of Reason
(1793) ... pours scorn on the supernatural pretensions of scripture,
combining Voltairean mockery with Paine's own style of taproom ridicule
to expose the absurdity of a theology built on a collection of
incoherent Levantine folktales.[51]
Davies identifies Paine's The Age of Reason as "the link between the two major narratives of what Jean-François Lyotard[52] calls the narrative of legitimation": the rationalism of the 18th-century Philosophes and the radical, historically based German 19th-century Biblical criticism of the Hegelians David Friedrich Strauss and Ludwig Feuerbach.
"The first is political, largely French in inspiration, and projects
'humanity as the hero of liberty'. The second is philosophical, German,
seeks the totality and autonomy of knowledge, and stresses understanding
rather than freedom as the key to human fulfilment and emancipation.
The two themes converged and competed in complex ways in the 19th
century and beyond, and between them set the boundaries of its various
humanisms."[53] Homo homini deus est ("The human being is a god to humanity" or "god is nothing [other than] the human being to himself"), Feuerbach had written.[54]
Victorian novelist Mary Ann Evans, known to the world as George Eliot, translated Strauss's Das Leben Jesu ("The Life of Jesus", 1846) and Ludwig Feuerbach's Das Wesen des Christentums ("The Essence of Christianity"). She wrote to a friend:
the fellowship between man and man which has been the
principle of development, social and moral, is not dependent on
conceptions of what is not man ... the idea of God, so far as it has
been a high spiritual influence, is the ideal of goodness entirely human
(i.e., an exaltation of the human).[55]
Eliot and her circle, which included her companion George Henry Lewes (the biographer of Goethe) and the abolitionist and social theorist Harriet Martineau, were much influenced by the positivism of Auguste Comte, whom Martineau had translated. Comte had proposed an atheistic culte founded on human principles – a secular Religion of Humanity
(which worshiped the dead, since most humans who have ever lived are
dead), complete with holidays and liturgy, modeled on the rituals of
what was seen as a discredited and dilapidated Catholicism.[56]
Although Comte's English followers, like Eliot and Martineau, for the
most part rejected the full gloomy panoply of his system, they liked the
idea of a religion of humanity. Comte's austere vision of the universe,
his injunction to "vivre pour autrui" ("live for others", from which comes the word "altruism"),[57] and his idealisation of women inform the works of Victorian novelists and poets from George Eliot and Matthew Arnold to Thomas Hardy.
The British Humanistic Religious Association was formed as one of
the earliest forerunners of contemporary chartered Humanist
organisations in 1853 in London. This early group was democratically
organised, with male and female members participating in the election of
the leadership, and promoted knowledge of the sciences, philosophy, and
the arts.[58]
In February 1877, the word was used pejoratively, apparently for the first time in America, to describe Felix Adler. Adler, however, did not embrace the term, and instead coined the name "Ethical Culture" for his new movement – a movement which still exists in the now Humanist-affiliated New York Society for Ethical Culture.[59]
In 2008, Ethical Culture Leaders wrote: "Today, the historic
identification, Ethical Culture, and the modern description, Ethical
Humanism, are used interchangeably."[60]
Active in the early 1920s, F.C.S. Schiller labelled his work "humanism", but for Schiller the term referred to the pragmatist philosophy he shared with William James. In 1929, Charles Francis Potter founded the First Humanist Society of New York, whose advisory board included Julian Huxley, John Dewey, Albert Einstein and Thomas Mann. Potter was a minister from the Unitarian tradition and in 1930 he and his wife, Clara Cook Potter, published Humanism: A New Religion. Throughout the 1930s, Potter was an advocate of such liberal causes as women’s rights, access to birth control, "civil divorce laws", and an end to capital punishment.[61]
Raymond B. Bragg, the associate editor of The New Humanist, sought to consolidate the input of Leon Milton Birkhead, Charles Francis Potter, and several members of the Western Unitarian Conference. Bragg asked Roy Wood Sellars to draft a document based on this information which resulted in the publication of the Humanist Manifesto
in 1933. Potter's book and the Manifesto became the cornerstones of
modern humanism, the latter declaring a new religion by saying, "any
religion that can hope to be a synthesising and dynamic force for today
must be shaped for the needs of this age. To establish such a religion
is a major necessity of the present." It then presented 15 theses of
humanism as foundational principles for this new religion.
In 1941, the American Humanist Association was organised. Noted members of the AHA included Isaac Asimov, who was the president from 1985 until his death in 1992, and writer Kurt Vonnegut, who followed as honorary president until his death in 2007. Gore Vidal became honorary president in 2009. Robert Buckman was the head of the association in Canada, and is now an honorary president.
In 2004, the American Humanist Association, along with other groups representing agnostics, atheists, and other freethinkers, joined to create the Secular Coalition for America, which advocates in Washington, D.C., for separation of church and state and for greater acceptance of nontheistic Americans nationwide. The Executive Director of the Secular Coalition for America is Sean Faircloth, a long-time state legislator from Maine.
Types
Scholarly tradition
Renaissance humanists
"Renaissance humanism" is the name later given to a tradition of
cultural and educational reform engaged in by civic and ecclesiastical
chancellors, book collectors, educators, and writers, who by the late
fifteenth century began to be referred to as umanisti – "humanists".[7]
It developed during the fourteenth and the beginning of the fifteenth
centuries, and was a response to the challenge of scholastic university
education, which was then dominated by Aristotelian philosophy and
logic. Scholasticism
focused on preparing men to be doctors, lawyers or professional
theologians, and was taught from approved textbooks in logic, natural
philosophy, medicine, law and theology.[63] There were important centres of humanism at Florence, Naples, Rome, Venice, Mantua, Ferrara, and Urbino.
Humanists reacted against this utilitarian approach and the
narrow pedantry associated with it. They sought to create a citizenry
(frequently including women) able to speak and write with eloquence and
clarity and thus capable of engaging the civic life of their communities
and persuading others to virtuous and prudent actions. This was to be
accomplished through the study of the studia humanitatis, today known as the humanities: grammar, rhetoric, history, poetry and moral philosophy.[64]
As a program to revive the cultural – and particularly the
literary – legacy and moral philosophy of classical antiquity, Humanism
was a pervasive cultural mode and not the program of a few isolated
geniuses like Rabelais or Erasmus as is still sometimes popularly believed.[65]
Secular humanists
Humanism is a democratic and ethical life stance, which affirms that human beings have the right
and responsibility to give meaning and shape to their own lives. It
stands for the building of a more humane society through an ethic based
on human and other natural values in the spirit of reason and free
inquiry through human capabilities. It is not theistic, and it does not accept supernatural views of reality.
Religious humanists
"Religious humanists" are non-superstitious people who nevertheless
see ethical humanism as their religion, and who seek to integrate
(secular) humanist ethical philosophy with congregational rituals
centred on human needs, interests, and abilities. Though practitioners
of religious humanism did not officially organise under the name of
"humanism" until the late 19th and early 20th centuries, non-theistic
religions paired with human-centred ethical philosophy have a long
history. A unified Ethical Culture movement was first founded in 1876; its founder, Felix Adler, was a former member of the Free Religious Association,
and conceived of Ethical Culture as a new religion that would retain
the ethical message at the heart of all religions. Ethical Culture was
religious in the sense of playing a defining role in people's lives and
addressing issues of ultimate concern. Nowadays religious humanists in
the United States are represented by organisations such as the American Ethical Union,
and will simply describe themselves as "ethical humanists" or
"humanists". Secular humanists and religious humanists organise together
as part of larger national and international groupings, and
differentiate themselves primarily in their attitude to the promotion of
humanist thinking.
Earlier attempts at inventing a secular religious tradition informed the Ethical Culture movement. The Cult of Reason (French: Culte de la Raison) was an atheistic religion devised during the French Revolution by Jacques Hébert, Pierre Gaspard Chaumette, and their supporters.[70] In 1793 the cathedral of Notre Dame de Paris was turned into a "Temple of Reason", and for a time Lady Liberty replaced the Virgin Mary on several altars.[71] In the 1850s, Auguste Comte, the Father of Sociology, founded Positivism, a "religion of humanity".[72]
The distinction between so-called "ethical" humanists and
"secular" humanists is most pronounced in the United States, although it
is becoming less so over time. The philosophical distinction is not
reflected at all in Canada, Latin America, Africa, or Asia, or most of
Europe. In the UK, where the humanist movement was strongly influenced
by Americans in the 19th century, the leading "ethical societies" and
"ethical churches" evolved into secular humanist charities (e.g. the
British Ethical Union became the British Humanist Association and later Humanists UK). In Scandinavian countries, "Human-etik" or "humanetikk"
(roughly synonymous with ethical humanism) is a popular strand within
humanism, originating from the works of Danish philosopher Harald Høffding. The Norwegian Humanist Association, known as Human-Etisk Forbund (literally "Human-Ethical League"), belongs to this tendency. Over time, the emphasis on human-etisk
has become less pronounced, and today HEF promotes both "humanisme" and
"human-etisk". In Sweden, the main Swedish humanist group Humanisterna
("Humanists") began as a "human-ethical association", like the
Norwegian humanists, before adopting the more prevalent secular humanist
model popular in most of Europe. Today the distinction in Europe is
mostly superficial.
Criticism
Polemics about humanism have sometimes assumed paradoxical twists and turns. Early 20th-century critics such as Ezra Pound, T. E. Hulme, and T. S. Eliot considered humanism to be sentimental "slop" (Hulme) or "an old bitch gone in the teeth" (Pound).[73] Postmodern critics who are self-described anti-humanists, such as Jean-François Lyotard and Michel Foucault, have asserted that humanism posits an overarching and excessively abstract notion of humanity or universal human nature,
which can then be used as a pretext for imperialism and domination of
those deemed somehow less than human. "Humanism fabricates the human as
much as it fabricates the nonhuman animal", suggests Timothy Laurie,
turning the human into what he calls "a placeholder for a range of
attributes that have been considered most virtuous among humans (e.g.
rationality, altruism), rather than most commonplace (e.g. hunger,
anger)".[74] Nevertheless, philosopher Kate Soper[75]
notes that by faulting humanism for falling short of its own benevolent
ideals, anti-humanism frequently "secretes a humanist rhetoric".[76]
In his book, Humanism (1997), Tony Davies calls these critics "humanist anti-humanists". Critics of antihumanism, most notably Jürgen Habermas,
counter that while antihumanists may highlight humanism's failure to
fulfil its emancipatory ideal, they do not offer an alternative
emancipatory project of their own.[77] Others, like the German philosopher Heidegger,
considered themselves humanists on the model of the ancient Greeks, but
thought humanism applied only to the German "race" and specifically to
the Nazis and thus, in Davies' words, were anti-humanist humanists.[78]
Such a reading of Heidegger's thought is itself deeply controversial;
Heidegger includes his own views and critique of humanism in his Letter
on Humanism. Davies acknowledges that after the horrific experiences of the
wars of the 20th century "it should no longer be possible to formulate
phrases like 'the destiny of man' or the 'triumph of human reason'
without an instant consciousness of the folly and brutality they drag
behind them". For "it is almost impossible to think of a crime that has
not been committed in the name of human reason". Yet, he continues, "it
would be unwise to simply abandon the ground occupied by the historical
humanisms. For one thing humanism remains on many occasions the only
available alternative to bigotry and persecution. The freedom to speak
and write, to organise and campaign in defence of individual or
collective interests, to protest and disobey: all these can only be
articulated in humanist terms."[79]
Modern humanists, such as Corliss Lamont or Carl Sagan, hold that humanity must seek truth through reason and the best observable evidence, and endorse scientific skepticism and the scientific method.
However, they stipulate that decisions about right and wrong must be
based on the individual and common good, with no consideration given to
metaphysical or supernatural beings. The idea is to engage with what is
human.[80]
The ultimate goal is human flourishing: making life better for all
humans and, as the most conscious species, also promoting concern for
the welfare of other sentient beings and the planet as a whole.[81]
The focus is on doing good and living well in the here and now, and
leaving the world a better place for those who come after. In 1925, the
English mathematician and philosopher Alfred North Whitehead cautioned: "The prophecy of Francis Bacon
has now been fulfilled; and man, who at times dreamt of himself as a
little lower than the angels, has submitted to become the servant and
the minister of nature. It still remains to be seen whether the same
actor can play both parts".[82]
Humanistic psychology
Humanistic psychology is a psychological perspective which rose to prominence in the mid-20th century in response to Sigmund Freud's psychoanalytic theory and B. F. Skinner's Behaviorism. The approach emphasizes an individual's inherent drive towards self-actualization and creativity. Psychologists Carl Rogers and Abraham Maslow
introduced a positive, humanistic psychology in response to what they
viewed as the overly pessimistic view of psychoanalysis in the early
1960s. Other sources include the philosophies of existentialism and phenomenology.