Tuesday, February 24, 2026

Speciesism

From Wikipedia, the free encyclopedia
 
The differential treatment of cows and dogs is an example of speciesism. Philosophers argue that members of the two species share similar interests and should be given equal consideration as a result, yet in many cultures cows are used as livestock and killed for food, while dogs are treated as companion animals.

Speciesism (/ˈspiːʃiːˌzɪzəm, -siːˌzɪz-/) is a term used in philosophy regarding the treatment of individuals of different species. The term has several definitions. Some define speciesism specifically as discrimination or unjustified treatment based on an individual's species membership, while others define it as differential treatment regardless of whether that treatment is justified. Richard D. Ryder, who coined the term, defined it as "a prejudice or attitude of bias in favour of the interests of members of one's own species and against those of members of other species". Speciesism underlies the belief, pervasive in modern society, that humans have the right to use non-human animals in exploitative ways. Studies from 2015 and 2019 suggest that people who support animal exploitation also tend to endorse racist, sexist, and other prejudicial views, reinforcing beliefs in human supremacy and group dominance that are used to justify systems of inequality and oppression.

As a term, "speciesism" first appeared during a protest against animal experimentation in 1970. Philosophers and animal rights advocates state that speciesism plays a role in the animal–industrial complex, including in the practice of factory farming, animal slaughter, blood sports (such as bullfighting, cockfighting and rodeos), the taking of animals' fur and skin, and experimentation on animals, as well as the refusal to help animals suffering in the wild due to natural processes, and the categorization of certain animals as alien, non-naturalized, feral or invasive, classifications then used to justify their killing or culling.

Notable proponents of the concept include Peter Singer, Oscar Horta, Steven M. Wise, Gary L. Francione, Melanie Joy, David Nibert, Steven Best, and Ingrid Newkirk. Among academics, the ethics, morality, and concept of speciesism have been the subject of substantial philosophical debate.[25] Carl Cohen, Nel Noddings, Bernard Williams, Peter Staudenmaier, Christopher Grau, Douglas Maclean, Roger Scruton, Thomas Wells, and Robert Nozick have criticized the term or elements of it.

History

Preceding ideas

Early perspectives on animal sensation and kinship

Buffon, a French naturalist, writing in Histoire Naturelle in 1753, questioned whether it could be doubted that animals "whose organization is similar to ours, must experience similar sensations", and that "those sensations must be proportioned to the activity and perfection of their senses". Despite these assertions, he also maintained that there exists a gap between humans and other animals.

In the poem "Poème sur le désastre de Lisbonne", Voltaire described a kinship between sentient beings, humans and animals alike, writing: "All sentient things, born by the same stern law, / Suffer like me, and like me also die."

Jeremy Bentham

Jeremy Bentham has been identified as an early Western philosopher to advocate for animals' equal consideration within a comprehensive, secular moral framework. He argued that species membership is morally irrelevant and that any being capable of suffering has intrinsic value. In his 1789 book An Introduction to the Principles of Morals and Legislation, he wrote:

The day may come, when the rest of the animal creation may acquire those rights which never could have been withheld from them but by the hand of tyranny. ... [T]he question is not, Can they reason? nor, Can they talk? but, Can they suffer?

Bentham also supported animal welfare laws. At the same time, he accepted the killing and use of animals, provided that what he regarded as unnecessary cruelty was avoided.

Lewis Gompertz

Lewis Gompertz stressed shared human–animal feelings, sensations, needs and physiological characteristics.

In his 1824 work Moral Inquiries on the Situation of Man and of Brutes, English writer and early animal rights advocate Lewis Gompertz argued for egalitarianism, extending it to nonhuman animals. He stated that humans and animals have highly similar feelings and sensations, noting that experiences such as hunger, desire, fear and anger affect both in similar ways. Gompertz also pointed to shared physiological characteristics between humans and animals, suggesting a similarity in sensation. He criticized human use of animals, drawing attention to what he saw as a disregard for their feelings, needs and desires.

Charles Darwin

English naturalist Charles Darwin, writing in his notebook in 1838, observed that humans tend to regard themselves as masterpieces produced by a deity, but recorded his own view that it was "truer to consider him created from animals". In his 1871 book The Descent of Man, Darwin argued:

There is no fundamental difference between man and the higher mammals in their mental faculties ... [t]he difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind. We have seen that the senses and intuitions, the various emotions and faculties, such as love, memory, attention, curiosity, imitation, reason, etc., of which man boasts, may be found in an incipient, or even sometimes in a well-developed condition, in the lower animals.

Lewis H. Morgan

In 1843 Lewis H. Morgan published "Mind or Instinct: An Inquiry Concerning the Manifestation of Mind by the Lower Orders of Animals" in The Knickerbocker, where he used anecdotes such as dogs returning to surgeons, beavers building dams, ants storing grain and marmots posting lookouts to argue that animals display memory, foresight and reasoning. He rejected appeals to "instinct" as an explanation, suggesting instead that humans and other species share a common mental principle differing only in degree, and he questioned claims of human moral superiority while criticizing practices such as hunting for sport and killing animals for food. He developed these arguments in 1857 in an unpublished paper, "Animal Psychology", read to the Pundit Club in Rochester, New York, which again rejected instinct and attributed animal behavior to perception, memory, reflection, volition and reason. Morgan also speculated that animals might possess moral capacities and immortal souls, and he placed species on a "scale of gradation" of intelligence while remaining a creationist. Although little noticed at the time, the essay has been described in later scholarship as an unusually early critique of instinct within American comparative psychology.

Arthur Schopenhauer

German philosopher Arthur Schopenhauer criticized anthropocentrism as, in his view, a fundamental defect of Christianity and Judaism. He argued that these religions contributed to the suffering of sentient beings by separating humans from other animals and encouraging their treatment as mere things. By contrast, Schopenhauer praised Brahmanism and Buddhism for their focus on kinship between humans and other animals and for their teaching about a connection between them through metempsychosis.

Secular and utilitarian animal advocacy

Henry S. Salt criticized the idea that there exists a "great gulf" between humans and other animals.

According to historian Chien-Hui Li, some secularist thinkers in the late 19th and early 20th centuries argued for animals on utilitarian grounds and on the basis of evolutionary kinship, linking their views to a broader critique of Christian doctrines about suffering and social order. These secularists sought a morality independent of religious authority. Some initially supported vivisection for human benefit but later questioned its necessity. Figures such as G. W. Foote argued for broader utility, focusing on long-term moral principles rather than immediate gains. Drawing on evolutionary theories, they described common origins and similarities between humans and animals and argued that morality should extend to animals as beings capable of experiencing pain and pleasure. They rejected the idea of a theological gulf separating humans from animals and used contemporary scientific theories to support various proposals for animal rights and welfare.

British writer and animal rights advocate Henry S. Salt, in his 1892 book Animals' Rights, argued that for humans to do justice to other animals they must look beyond the conception of a "great gulf" between them, claiming instead that people should recognize the "common bond of humanity that unites all living beings in one universal brotherhood".

Edward Payson Evans, an American scholar and animal rights advocate, criticized anthropocentric psychology and ethics in his 1897 work Evolutional Ethics and Animal Psychology. He argued that such views treat humans as fundamentally different from other sentient beings, and he denied that this distinction removes all moral obligations toward animals. Evans held that Darwin's theory of evolution implies moral duties not only toward enslaved humans but also toward nonhuman animals. He asserted that beyond kind treatment, animals need enforceable rights to protect them from cruelty. Evans contended that recognizing kinship between humans and other sentient beings would make it impossible, in his view, to mistreat them.

An 1898 article in The Zoophilist, titled "Anthropocentric Ethics", argued that some early civilizations, prior to Christianity, regarded tenderness and mercy toward sentient beings as a moral requirement. It discussed Zarathustra, Buddha and early Greek philosophers, who practiced vegetarianism, as exemplifying this outlook. The article claimed that this understanding of human–animal kinship persisted into early Christianity but was challenged by figures such as Origen, who saw animals as mere automata for human use. It concluded that the relationship between animal psychology and evolutionary ethics was gaining scientific and moral attention and could no longer be ignored.

In 1895, American zoologist, philosopher and animal rights advocate J. Howard Moore described vegetarianism as the ethical result of recognizing the evolutionary kinship of all creatures, connecting his position with Darwin's insights. He criticized what he called the "pre-Darwinian delusion" that nonhuman animals were created for human use. In his 1899 book Better-World Philosophy, Moore argued that human ethics were still anthropocentric, having developed to include various human groups but not animals. He proposed "zoocentricism" as a further development, extending ethical concern to the entire sentient universe. In his 1906 book The Universal Kinship, Moore criticized what he described as a "provincialist" attitude leading to animal mistreatment, comparing it to denying ethical relations among human groups. He rejected what he saw as a human-centric perspective and urged consideration of the standpoint of animal victims. Moore concluded that the Golden Rule should apply to all sentient beings, advocating equal ethical consideration for animals and humans:

[D]o as you would be done by, and not to the dark man and the white woman alone, but to the sorrel horse and the gray squirrel as well; not to creatures of your own anatomy only, but to all creatures.

Coining of the term

Richard D. Ryder coined the term "speciesism" in 1970.

The term speciesism, and the argument that it is a prejudice, first appeared in 1970 in a privately printed pamphlet written by British psychologist Richard D. Ryder. Ryder was a member of a group of academics in Oxford, England, part of the nascent animal rights community, now known as the Oxford Group. One of the group's activities was distributing pamphlets about areas of concern; the pamphlet titled "Speciesism" was written to protest against animal experimentation. The term was intended by its proponents to create a rhetorical and categorical link to racism and sexism.

Ryder stated in the pamphlet that "[s]ince Darwin, scientists have agreed that there is no 'magical' essential difference between humans and other animals, biologically-speaking. Why then do we make an almost total distinction morally? If all organisms are on one physical continuum, then we should also be on the same moral continuum." He wrote that, at that time in the United Kingdom, 5,000,000 animals were being used each year in experiments, and that attempting to gain benefits for our own species through the mistreatment of others was "just 'speciesism' and as such it is a selfish emotional argument rather than a reasoned one". Ryder used the term again in an essay, "Experiments on Animals", in Animals, Men and Morals (1971), a collection of essays on animal rights edited by philosophy graduate students Stanley and Roslind Godlovitch and John Harris, who were also members of the Oxford Group. Ryder wrote:

In as much as both "race" and "species" are vague terms used in the classification of living creatures according, largely, to physical appearance, an analogy can be made between them. Discrimination on grounds of race, although most universally condoned two centuries ago, is now widely condemned. Similarly, it may come to pass that enlightened minds may one day abhor "speciesism" as much as they now detest "racism". The illogicality in both forms of prejudice is of an identical sort. If it is accepted as morally wrong to deliberately inflict suffering upon innocent human creatures, then it is only logical to also regard it as wrong to inflict suffering on innocent individuals of other species. ... The time has come to act upon this logic.

Spread of the idea

Peter Singer popularized the idea in Animal Liberation (1975).

The term was popularized by the Australian philosopher Peter Singer in his book Animal Liberation (1975). Singer had known Ryder from his own time as a graduate philosophy student at Oxford. He credited Ryder with having coined the term and used it in the title of his book's fifth chapter: "Man's Dominion ... a short history of speciesism", defining it as "a prejudice or attitude of bias in favour of the interests of members of one's own species and against those of members of other species":

Racists violate the principle of equality by giving greater weight to the interests of members of their own race when there is a clash between their interests and the interests of those of another race. Sexists violate the principle of equality by favouring the interests of their own sex. Similarly, speciesists allow the interests of their own species to override the greater interests of members of other species. The pattern is identical in each case.

Singer argued from a preference-utilitarian perspective that speciesism violates the principle of equal consideration of interests, an idea based on Jeremy Bentham's principle: "each to count for one, and none for more than one". Singer stated that, although there may be differences between humans and nonhumans, they share the capacity to suffer, and we must give equal consideration to that suffering; any position that allows similar cases to be treated in a dissimilar fashion fails to qualify as an acceptable moral theory. The term caught on; Singer wrote that it was an awkward word but that he could not think of a better one. It became an entry in the Oxford English Dictionary in 1985, defined as "discrimination against or exploitation of animal species by human beings, based on an assumption of mankind's superiority." In 1994 the Oxford Dictionary of Philosophy offered a wider definition: "By analogy with racism and sexism, the improper stance of refusing respect to the lives, dignity, or needs of animals of other than the human species."

Anti-speciesism movement

Anti-speciesism graffiti in Turin
 
2015 anti-speciesism protest in Montreal

The French-language journal Cahiers antispécistes ("Antispeciesist notebooks") was founded in 1991, by David Olivier, Yves Bonnardel and Françoise Blanchon, who were the first French activists to speak out against speciesism. The aim of the journal was to disseminate anti-speciesist ideas in France and to encourage debate on the topic of animal ethics, specifically on the difference between animal liberation and ecology. Estela Díaz and Oscar Horta assert that in Spanish-speaking countries, unlike English-speaking countries, anti-speciesism has become the dominant approach for animal advocacy. In Italy, the contemporary anti-speciesist movement has two main approaches: one that takes a strong, radical stance against the dominant societal norms represented by authors such as Adriano Fragano, author of the "Antispeciesist Manifesto", and another that aligns more with mainstream, neoliberal views.

In the 21st century, animal rights groups such as the Farm Animal Rights Movement and People for the Ethical Treatment of Animals have attempted to popularize the concept by promoting a World Day Against Speciesism on 5 June. The World Day for the End of Speciesism (WoDES), held annually since 2015, is a similar observance at the end of August.

Social psychology and relationship with other prejudices

Scholars including philosopher Peter Singer and botanist Brent Mishler have argued that speciesism is analogous to racism, the belief that some human races are superior to others.

In the 2019 book Why We Love and Exploit Animals, Kristof Dhont, Gordon Hodson, Ana C. Leite, and Alina Salmen examine the psychological connections between speciesism and other prejudices such as racism and sexism. Marjetka Golež Kaučič connects racism and speciesism, arguing that discrimination based on race and discrimination based on species are strongly interrelated, with human rights providing the legal ground for the development of animal rights. Kaučič further argues that racism and speciesism are connected to issues of freedom, both collective and individual.

In one study, 242 participants responded to questions on the Speciesism Scale, and those who scored higher on this scale scored higher on racism, sexism, and homophobia scales. Other studies suggest that those who support animal exploitation also tend to endorse racist and sexist views, furthering the beliefs in human supremacy and group dominance in order to justify systems of inequality and oppression. It is suggested that the connection rests in the ideology of social dominance.

Psychologists have also examined speciesism as a specific psychological construct or attitude (as opposed to speciesism as a philosophy), measured using a specifically designed Likert scale. Studies have found that speciesism is a stable construct that differs amongst personalities and correlates with other variables. For example, speciesism has been found to have a weak positive correlation with homophobia and right-wing authoritarianism, as well as slightly stronger correlations with political conservatism, racism and system justification. Moderate positive correlations were found with social dominance orientation and sexism. Social dominance orientation was theorised to underpin most of the correlations; controlling for it reduces all correlations substantially and renders many statistically insignificant. Speciesism likewise predicts levels of prosociality toward animals and behavioural food choices.

Those who state that speciesism is unfair to individuals of nonhuman species have often invoked mammals and chickens in the context of research or farming. There is not yet a clear definition or line agreed upon by a significant segment of the movement as to which species are to be treated equally with humans or in some ways additionally protected: mammals, birds, reptiles, arthropods, insects, bacteria, etc. This question is all the more complex since a study by Miralles et al. (2019) has brought to light the evolutionary component of human empathic and compassionate reactions and the influence of anthropomorphic mechanisms in our affective relationship with the living world as a whole: the more an organism is evolutionarily distant from us, the less we recognize ourselves in it and the less we are moved by its fate.

Some researchers have suggested that since speciesism could be considered, in terms of social psychology, a prejudice (defined as "any attitude, emotion, or behaviour toward members of a group, which directly or indirectly implies some negativity or antipathy toward that group"), then laypeople may be aware of a connection between it and other forms of "traditional" prejudice. Research suggests laypeople do indeed tend to infer similar personality traits and beliefs from a speciesist that they would from a racist, sexist or homophobe. However, it is not clear if there is a link between speciesism and non-traditional forms of prejudice such as negative attitudes towards the overweight or towards Christians.

Psychological studies have furthermore argued that people tend to "morally value individuals of certain species less than others even when beliefs about intelligence and sentience are accounted for". One study identified that there are age-related differences in moral views of animal worth, with children holding less speciesist beliefs than adults; the authors argue that such findings indicate that the development of speciesist beliefs is socially constructed over an individual's lifetime.

Relationship with the animal–industrial complex

Piers Beirne considers speciesism the ideological anchor of the intersecting networks of the animal–industrial complex, such as factory farms, vivisection, hunting and fishing, zoos and aquaria, and wildlife trade. Amy Fitzgerald and Nik Taylor argue that the animal–industrial complex is both a consequence and a cause of speciesism, which according to them is a form of discrimination similar to racism or sexism. They also argue that the obfuscation of meat's animal origins is a critical part of the animal–industrial complex under capitalist and neoliberal regimes. Speciesism results in the belief, pervasive in modern society, that humans have the right to use non-human animals.

Sociologist David Nibert states,

The profound cultural devaluation of other animals that permits the violence that underlies the animal industrial complex is produced by far-reaching speciesist socialization. For instance, the system of primary and secondary education under the capitalist system largely indoctrinates young people into the dominant societal beliefs and values, including a great deal of procapitalist and speciesist ideology. The devalued status of other animals is deeply ingrained; animals appear in schools merely as caged "pets", as dissection and vivisection subjects, and as lunch. On television and in movies, the unworthiness of other animals is evidenced by their virtual invisibility; when they do appear, they generally are marginalized, vilified, or objectified. Not surprisingly, these and numerous other sources of speciesism are so ideologically profound that those who raise compelling moral objections to animal oppression largely are dismissed, if not ridiculed.

Some scholars have argued that all kinds of animal production are rooted in speciesism, reducing animals to mere economic resources. Built on the production and slaughter of animals, the animal–industrial complex is perceived as the materialization of the institution of speciesism, with speciesism becoming "a mode of production". In his 2011 book Critical Theory and Animal Liberation, J. Sanbonmatsu argues that speciesism is not ignorance or the absence of a moral code towards animals, but is a mode of production and material system imbricated with capitalism.

Arguments in favor

Defenders of speciesism such as Carl Cohen argue that speciesism is essential for right conduct.

Philosopher Carl Cohen stated in 1986: "Speciesism is not merely plausible; it is essential for right conduct, because those who will not make the morally relevant distinctions among species are almost certain, in consequence, to misapprehend their true obligations." Cohen writes that racism and sexism are wrong because there are no relevant differences between the sexes or races. Between people and animals, he states, there are significant differences; his view is that animals do not qualify for Kantian personhood, and as such have no rights.

Nel Noddings, the American feminist, has criticized Singer's concept of speciesism for being simplistic, and for failing to take into account the context of species preference, as concepts of racism and sexism have taken into account the context of discrimination against humans. Peter Staudenmaier has stated that comparisons between speciesism and racism or sexism are trivializing:

The central analogy to the civil rights movement and the women's movement is trivializing and ahistorical. Both of those social movements were initiated and driven by members of the dispossessed and excluded groups themselves, not by benevolent men or white people acting on their behalf. Both movements were built precisely around the idea of reclaiming and reasserting a shared humanity in the face of a society that had deprived it and denied it. No civil rights activist or feminist ever argued, "We're sentient beings too!" They argued, "We're fully human too!" Animal liberation doctrine, far from extending this humanist impulse, directly undermines it.

A similar argument was made by Bernard Williams, who observed that a difference between speciesism and racism or sexism is that racists and sexists deny any input from those of a different race or sex when questioning how they should be treated, whereas in the case of how animals should be treated by humans, only humans are capable of discussing the question at all. Williams further observed that being a human being is often used as an argument against discrimination on the grounds of race or sex, whereas membership in a particular race or sex is seldom invoked against discrimination.

Williams also argued in favour of speciesism (which he termed "humanism"), asking: "Why are fancy properties which are grouped under the label of personhood 'morally relevant' to issues of destroying a certain kind of animal, while the property of being a human being is not?" To respond that it is because these are properties considered valuable by human beings, Williams argued, does not undermine speciesism, as humans also consider human beings to be valuable, thus justifying speciesism. The only way to resolve this would be to argue that these properties are "simply better", but in that case one would need to justify why they are better if not because of human attachment to them.

Christopher Grau supported Williams, arguing that if one used properties like rationality, sentience and moral agency as criteria for moral status as an alternative to species-based moral status, then it would need to be shown why these particular properties are to be used instead of others; there must be something that gives them special status. Grau states that to claim these are simply better properties would require the existence of an impartial observer, an "enchanted picture of the universe", to declare them so. Thus, he argues, such properties have no greater justification as criteria for moral status than membership in a species does. Grau also states that even if such an impartial perspective existed, it would not necessarily count against speciesism, since it is entirely possible that an impartial observer could give humans reasons to care about humanity. He further observes that an impartial observer that valued only minimizing suffering would likely be overcome with horror at the suffering of all individuals and would rather have humanity annihilate the planet than allow that suffering to continue. Grau thus concludes that those endorsing the idea of deriving values from an impartial observer do not seem to have seriously considered the conclusions of such an idea.

Douglas Maclean agreed that Singer raised important questions and challenges, particularly with his argument from marginal cases. However, Maclean questioned whether human morality can be extended to different species, observing that animals are generally held exempt from it; most people would try to stop a man kidnapping and killing a woman, but would regard a hawk capturing and killing a marmot with awe and criticize anyone who tried to intervene. Maclean thus suggests that morality only makes sense within human relations, and that the further one gets from them, the less it can be applied.

The British philosopher Roger Scruton regards the emergence of the animal rights and anti-speciesism movement as "the strangest cultural shift within the liberal worldview", because the idea of rights and responsibilities is, he states, distinctive to the human condition, and it makes no sense to spread them beyond our own species. Scruton argues that if animals have rights, then they also have duties, which animals would routinely violate, such as by breaking laws or killing other animals. He accuses anti-speciesism advocates of "pre-scientific" anthropomorphism, attributing traits to animals that are, he says, Beatrix Potter-like, where "only man is vile". It is, he states, a fantasy, a world of escape.

Thomas Wells states that Singer's call for ending animal suffering would justify simply exterminating every animal on the planet in order to prevent the numerous ways in which they suffer, as they could no longer feel any pain. Wells also argued that by focusing on the suffering humans inflict on animals and ignoring suffering animals inflict upon themselves or that inflicted by nature, Singer is creating a hierarchy in which some suffering matters more than others, despite claiming to be committed to equality of suffering. Wells further states that the capacity to suffer, Singer's criterion for moral status, is a matter of degree rather than an absolute category; he observes that Singer denies moral status to plants on the grounds that they cannot subjectively feel anything (even though they react to stimuli), yet alleges there is no indication that nonhuman animals feel pain and suffering the way humans do.

Robert Nozick notes that if species membership is irrelevant, then this would mean that endangered animals have no special claim.

The Rev. John Tuohey, founder of the Providence Center for Health Care Ethics, writes that the logic behind the anti-speciesism critique is flawed, and that, although the animal rights movement in the United States has been influential in slowing animal experimentation, and in some cases halting particular studies, no one has offered a compelling argument for species equality.

Arguments against

Moral community, argument from marginal cases

The Trial of Bill Burns (1838) in London showing Richard Martin (MP for Galway) in court with a donkey beaten by his owner, leading to Europe's first known conviction for animal cruelty

Paola Cavalieri writes that the current humanist paradigm is that only human beings are members of the moral community and that all are worthy of equal protection. Species membership, she writes, is ipso facto moral membership. The paradigm has an inclusive side (all human beings deserve equal protection) and an exclusive one (only human beings have that status).

Nonhumans do possess some moral status in many societies, but it generally extends only to protection against what Cavalieri calls "wanton cruelty". Anti-speciesists state that the extension of moral membership to all humanity, regardless of individual properties such as intelligence, while denying it to nonhumans, also regardless of individual properties, is internally inconsistent. According to the argument from marginal cases, if infants, the senile, the comatose, and the cognitively disabled (marginal-case human beings) have a certain moral status, then nonhuman animals must be awarded that status too since there is no morally relevant ability that the marginal-case humans have that nonhumans lack.

American legal scholar Steven M. Wise states that speciesism is a bias as arbitrary as any other. He cites the philosopher R.G. Frey (1941–2012), a leading animal rights critic, who wrote in 1983 that, if forced to choose between abandoning experiments on animals and allowing experiments on "marginal-case" humans, he would choose the latter, "not because I begin a monster and end up choosing the monstrous, but because I cannot think of anything at all compelling that cedes all human life of any quality greater value than animal life of any quality."

"Discontinuous mind"

Richard Dawkins argues that speciesism is an example of the "discontinuous mind".

Richard Dawkins, the evolutionary biologist, wrote against speciesism in The Blind Watchmaker (1986), The Great Ape Project (1993), and The God Delusion (2006), elucidating the connection with evolutionary theory. He compares former racist attitudes and assumptions to their present-day speciesist counterparts. In the chapter "The one true tree of life" in The Blind Watchmaker, he states that it is not only zoological taxonomy that is saved from awkward ambiguity by the extinction of intermediate forms but also human ethics and law. Dawkins states that what he calls the "discontinuous mind" is ubiquitous, dividing the world into units that reflect nothing but our use of language, and animals into discontinuous species:

The director of a zoo is entitled to "put down" a chimpanzee that is surplus to requirements, while any suggestion that he might "put down" a redundant keeper or ticket-seller would be greeted with howls of incredulous outrage. The chimpanzee is the property of the zoo. Humans are nowadays not supposed to be anybody's property, yet the rationale for discriminating against chimpanzees is seldom spelled out, and I doubt if there is a defensible rationale at all. Such is the breathtaking speciesism of our Christian-inspired attitudes, the abortion of a single human zygote (most of them are destined to be spontaneously aborted anyway) can arouse more moral solicitude and righteous indignation than the vivisection of any number of intelligent adult chimpanzees! ... The only reason we can be comfortable with such a double standard is that the intermediates between humans and chimps are all dead.

Dawkins elaborated in a discussion with Singer at The Center for Inquiry in 2007 when asked whether he continues to eat meat: "It's a little bit like the position which many people would have held a couple of hundred years ago over slavery. Where lots of people felt morally uneasy about slavery but went along with it because the whole economy of the South depended upon slavery."

Centrality of consciousness

"Libertarian extension" is the idea that the intrinsic value of nature can be extended beyond sentient beings. This seeks to apply the principle of individual rights not only to all animals but also to objects without a nervous system such as trees, plants, and rocks. Ryder rejects this argument, writing that "value cannot exist in the absence of consciousness or potential consciousness. Thus, rocks and rivers and houses have no interests and no rights of their own. This does not mean, of course, that they are not of value to us, and to many other [beings who experience pain], including those who need them as habitats and who would suffer without them."

Comparisons to the Holocaust

David Sztybel states in his paper, "Can the Treatment of Animals Be Compared to the Holocaust?" (2006), that the racism of the Nazis is comparable to the speciesism inherent in eating meat or using animal by-products, particularly those produced on factory farms. Y. Michael Barilan, an Israeli physician, states that speciesism is not the same thing as Nazi racism, because the latter extolled the abuser and condemned the weaker and the abused. He describes speciesism as the recognition of rights on the basis of group membership, rather than solely on the basis of moral considerations.

Law and policy

Law

The first major statute addressing animal protection in the United States, titled "An Act for the More Effectual Prevention of Cruelty to Animals", was enacted in 1867. It made animal cruelty prosecutable and enforceable. The act, which has since been revised state by state to suit modern cases, originally addressed matters such as animal neglect, abandonment, torture, fighting, transport, impound standards, and licensing standards. Although an animal rights movement had already begun as early as the late 1800s, some of the laws that would shape the treatment of animals as industry grew were enacted around the same time that Richard Ryder was bringing the notion of speciesism into the conversation. Legislation was being proposed and passed in the U.S. that would reshape animal welfare in industry and science. The Humane Slaughter Act, created to alleviate some of the suffering of livestock during slaughter, was passed in 1958. The Animal Welfare Act of 1966, passed by the 89th United States Congress and signed into law by President Lyndon B. Johnson, was designed to impose much stricter regulation and supervision of the handling of animals used in laboratory experimentation and exhibition; it has since been amended and expanded. These laws foreshadowed and influenced the shifting attitudes toward nonhuman animals and their right to humane treatment, which Richard D. Ryder and Peter Singer would later popularize in the 1970s and 1980s.

Great ape personhood

Great ape personhood is the idea that the attributes of non-human great apes are such that their sentience and personhood should be recognized by the law, rather than simply protecting them as a group under animal cruelty legislation. Awarding personhood to nonhuman primates would require that their individual interests be taken into account.

Observances

The World Day for the End of Speciesism (WoDES) is an international event aimed at denouncing speciesism, held annually at the end of August since 2015. The observance was initiated by members of the Swiss association Pour l'Egalité Animale (PEA), which coordinates the international day each year and provides supporting materials.

The "World Day Against Speciesism" is observed annually on June 5.

Artificial intelligence in healthcare

X-ray of a hand, with automatic calculation of bone age by computer software

Artificial intelligence in healthcare is the application of artificial intelligence (AI) to analyze and understand complex medical and healthcare data. In some cases, it can exceed or augment human capabilities by providing better or faster ways to diagnose, treat, or prevent disease.

As the widespread use of artificial intelligence in healthcare is still relatively new, research is ongoing into its applications across various medical subdisciplines and related industries. AI programs are being applied to practices such as diagnostics, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. Since radiographs are the most commonly performed imaging tests in radiology, the potential for AI to assist with triage and interpretation of radiographs is particularly significant.

Using AI in healthcare presents unprecedented ethical concerns related to issues such as data privacy, automation of jobs, and amplifying already existing algorithmic bias. New technologies such as AI are often met with resistance by healthcare leaders, leading to slow and erratic adoption. There have been cases where AI has been put to use in healthcare without proper testing. A systematic review and thematic analysis in 2023 showed that most stakeholders including health professionals, patients, and the general public doubted that care involving AI could be empathetic. Meta-studies have found that the scientific literature on AI in healthcare often suffers from a lack of reproducibility.

Applications in healthcare systems

Disease diagnosis

Accurate and early diagnosis of diseases remains a challenge in healthcare. Recognizing medical conditions and their symptoms is a complex problem. AI can assist clinicians with data processing capabilities that save time and improve accuracy. Through the use of machine learning, artificial intelligence can substantially aid doctors in patient diagnosis through the analysis of mass electronic health records (EHRs). AI can help with the early prediction of, for example, Alzheimer's disease and dementias by looking through large numbers of similar cases and possible treatments.

In 2023 a study reported higher satisfaction rates with ChatGPT-generated responses compared with those from physicians for medical questions posted on Reddit's r/AskDocs. Evaluators preferred ChatGPT's responses to physician responses in 78.6% of 585 evaluations, noting better quality and empathy. The authors noted that these were isolated questions taken from an online forum, not in the context of an established patient-physician relationship. Moreover, responses were not graded on the accuracy of medical information, and some have argued that the experiment was not properly blinded, with the evaluators being coauthors of the study.

Large healthcare-related data warehouses of sometimes hundreds of millions of patients have been used as training data for AI models.

A 2025 meta-analysis in PLOS One found that the use of AI algorithms for detecting tooth decay was clinically justified.

Electronic health records

Electronic health records (EHR) are crucial to the digitalization and information spread of the healthcare industry. Now that around 80% of medical practices use EHR, some anticipate the use of artificial intelligence to interpret the records and provide new information to physicians.

One application uses natural language processing (NLP) to make more succinct reports that limit the variation between medical terms by matching similar medical terms. For example, the terms heart attack and myocardial infarction mean the same thing, but physicians may use one over the other based on personal preference. NLP algorithms consolidate these differences so that larger datasets can be analyzed. Another use of NLP identifies phrases that are redundant due to repetition in a physician's notes and keeps the relevant information to make it easier to read. Other applications use concept processing to analyze the information entered by the current patient's doctor to present similar cases and help the physician remember to include all relevant details.
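This kind of term normalization can be sketched in a few lines of Python; the synonym table below is invented for illustration and is not a real clinical ontology such as SNOMED CT:

```python
# Minimal sketch of clinical term normalization: map synonymous phrases
# to one canonical concept so records can be aggregated and analyzed.
# The synonym table is illustrative only, not a real medical vocabulary.
SYNONYMS = {
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
    "high blood pressure": "hypertension",
}

def normalize(term: str) -> str:
    """Return the canonical form of a clinical term (identity if unknown)."""
    key = term.strip().lower()
    return SYNONYMS.get(key, key)

notes = ["Heart attack", "myocardial infarction", "High blood pressure"]
canonical = [normalize(t) for t in notes]
# canonical == ["myocardial infarction", "myocardial infarction", "hypertension"]
```

Once terms are mapped to canonical concepts, records written by different physicians can be counted and compared as a single dataset.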

Beyond making content edits to an EHR, there are AI algorithms that evaluate an individual patient's record and predict a risk for a disease based on their previous information and family history. One general approach is a rule-based system, which makes decisions similarly to how humans use flow charts. This system takes in large amounts of data and creates a set of rules that connect specific observations to concluded diagnoses. The algorithm can then take in a new patient's data and try to predict the likelihood that they will have a certain condition or disease. Since the algorithms can evaluate a patient's information against collective data, they can find outstanding issues to bring to a physician's attention and save time. One study conducted by the Centerstone Research Institute found that predictive modeling of EHR data has achieved 70–72% accuracy in predicting individualized treatment response. These methods are helpful because the amount of online health records doubles every five years. Physicians do not have the bandwidth to process all this data manually, and AI can leverage the data to assist physicians in treating their patients.
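A rule-based system of the kind described above can be sketched as an explicit list of condition-to-label rules, evaluated like a flow chart; the fields and thresholds here are invented for illustration, not clinical guidance:

```python
# Illustrative rule-based risk flagger. Each rule pairs a test on the
# patient record with a risk label, mimicking a clinical flow chart.
# Field names and cutoffs are invented for this sketch.
RULES = [
    (lambda p: p["systolic_bp"] >= 140, "possible hypertension"),
    (lambda p: p["fasting_glucose"] >= 126, "possible diabetes"),
    (lambda p: p["bmi"] >= 30 and p["family_history_chd"], "elevated CHD risk"),
]

def flag_risks(patient: dict) -> list:
    """Return the labels of all rules that fire for this patient record."""
    return [label for test, label in RULES if test(patient)]

patient = {"systolic_bp": 150, "fasting_glucose": 110,
           "bmi": 32, "family_history_chd": True}
flag_risks(patient)  # ['possible hypertension', 'elevated CHD risk']
```

In a deployed system the rules would be induced from collective EHR data rather than hand-written, but the evaluation step is the same.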

AlphaFold and drug discovery

AlphaFold can predict protein structures from the constituent amino acid sequence, an advance expected to benefit the life sciences by accelerating drug discovery and enabling a better understanding of disease. Nobel laureate Venki Ramakrishnan called the result "a stunning advance on the protein folding problem", adding that "It has occurred decades before many people in the field would have predicted. It will be exciting to see the many ways in which it will fundamentally change biological research." In 2023, Demis Hassabis and John Jumper won the Breakthrough Prize in Life Sciences as well as the Albert Lasker Award for Basic Medical Research for their management of the AlphaFold project. Hassabis and Jumper went on to win the Nobel Prize in Chemistry in 2024 for their work on protein structure prediction, shared with David Baker of the University of Washington.

Drug interactions

Improvements in natural language processing led to the development of algorithms to identify drug-drug interactions in medical literature. Drug-drug interactions pose a threat to those taking multiple medications simultaneously, and the danger increases with the number of medications being taken. To address the difficulty of tracking all known or suspected drug-drug interactions, machine learning algorithms have been created to extract information on interacting drugs and their possible effects from medical literature. Efforts were consolidated in 2013 in the DDIExtraction Challenge, in which a team of researchers at Carlos III University assembled a corpus of literature on drug-drug interactions to form a standardized test for such algorithms. Competitors were tested on their ability to accurately determine, from the text, which drugs were shown to interact and what the characteristics of their interactions were. Researchers continue to use this corpus to standardize the measurement of the effectiveness of their algorithms.

Other algorithms identify drug-drug interactions from patterns in user-generated content, especially electronic health records and/or adverse event reports. Organizations such as the FDA Adverse Event Reporting System (FAERS) and the World Health Organization's VigiBase allow doctors to submit reports of possible negative reactions to medications. Deep learning algorithms have been developed to parse these reports and detect patterns that imply drug-drug interactions.
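A heavily simplified sketch of mining drug pairs from adverse-event reports follows; real systems use trained NLP models rather than keyword matching, and the drug lexicon and report texts below are invented examples:

```python
import re
from itertools import combinations
from collections import Counter

# Toy co-mention sketch: count drug pairs that appear together in the
# same adverse-event report. Production systems use deep learning over
# large report databases; this lexicon and these reports are invented.
DRUGS = {"warfarin", "aspirin", "ibuprofen"}

def candidate_pairs(report: str):
    """Return sorted pairs of known drugs co-mentioned in one report."""
    words = re.findall(r"[a-z]+", report.lower())
    found = sorted({w for w in words if w in DRUGS})
    return list(combinations(found, 2))

reports = [
    "Patient on warfarin developed bleeding after starting aspirin.",
    "Warfarin and ibuprofen taken together; INR elevated.",
]
counts = Counter(p for r in reports for p in candidate_pairs(r))
# counts[("aspirin", "warfarin")] == 1
```

Pairs that co-occur far more often than chance across many reports become candidate interactions for expert review.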

Telemedicine

The growth of telemedicine, the remote treatment of patients, has opened up a range of possible AI applications. AI can assist in caring for patients remotely by monitoring their information through sensors. A wearable device may allow for constant monitoring of a patient and the ability to notice changes that may be less distinguishable by humans. The information can be compared to other data that has already been collected, using artificial intelligence algorithms that alert physicians if there are any issues to be aware of.
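A minimal sketch of such sensor-based alerting is to flag readings that deviate far from the patient's recent baseline; the window size and three-sigma threshold below are illustrative choices, not clinical settings:

```python
from statistics import mean, stdev

# Sketch of an out-of-range alert for a wearable heart-rate stream:
# flag any reading far from the rolling baseline of recent readings.
# Window length and sigma cutoff are arbitrary for this illustration.
def alerts(readings, window=5, sigmas=3.0):
    """Return indices of readings more than `sigmas` std devs from baseline."""
    flagged = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sd = mean(base), stdev(base)
        if sd and abs(readings[i] - mu) > sigmas * sd:
            flagged.append(i)
    return flagged

hr = [72, 74, 71, 73, 72, 75, 74, 73, 72, 130]  # sudden spike at the end
alerts(hr)  # [9]
```

A deployed monitor would compare against population data and patient history as well, but the core idea of baseline-relative anomaly detection is the same.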

A 2025 systematic review and meta-analysis of 15 studies comparing AI chatbots with human healthcare professionals in text-based consultations found that in a large majority of studies participants rated chatbot responses as more empathic than those from clinicians.

Another application of artificial intelligence is chatbot therapy. Some researchers argue, however, that reliance on chatbots for mental healthcare does not offer the reciprocity and accountability of care that should exist in the relationship between the consumer of mental healthcare and the care provider (be it a chatbot or a psychologist). Examples of these chatbots include Woebot, Earkick, and Wysa.

Since life expectancy has risen, artificial intelligence could be useful in helping to care for older populations. Tools such as environmental and personal sensors can identify a person's regular activities and alert a caretaker if a behavior or a measured vital sign is abnormal. Although the technology is useful, there are also discussions about limiting monitoring in order to respect a person's privacy, since some technologies are designed to map out home layouts and detect human interactions.

Workload management

AI has the potential to streamline care coordination and reduce the workload. AI algorithms can automate administrative tasks, prioritize patient needs, and facilitate seamless communication in a healthcare team.

Clinical applications

Cardiovascular

Artificial intelligence algorithms have shown promising results in accurately diagnosing and risk stratifying patients with concern for coronary artery disease, showing potential as an initial triage tool. Other algorithms have been used in predicting patient mortality, medication effects, and adverse events following treatment for acute coronary syndrome. Wearables, smartphones, and internet-based technologies have also shown the ability to monitor patients' cardiac data points, expanding the amount of data and the various settings AI models can use and potentially enabling earlier detection of cardiac events occurring outside of the hospital. A study in 2019 found that AI can be used to predict heart attack with up to 90% accuracy. Another growing area of research is the utility of AI in classifying heart sounds and diagnosing valvular disease. Challenges of AI in cardiovascular medicine have included the limited data available to train machine learning models, such as limited data on social determinants of health as they pertain to cardiovascular disease.

A key limitation of early studies evaluating AI was the omission of data comparing algorithmic performance with that of humans. Studies that do assess AI performance relative to physicians have found, for example, that AI is non-inferior to humans in the interpretation of cardiac echocardiograms and that AI can diagnose heart attack better than human physicians in the emergency setting, reducing both low-value testing and missed diagnoses.

In cardiovascular tissue engineering and organoid studies, AI is increasingly used to analyze microscopy images and integrate electrophysiological readouts.

Dermatology

Medical imaging (such as X-ray and photography) is a commonly used tool in dermatology and the development of deep learning has been strongly tied to image processing.

A woman taking a photo of her scalp for AI-based analysis of hair loss

Han et al. demonstrated detection of keratinocytic skin cancer from facial photographs. Esteva et al. demonstrated dermatologist-level classification of skin cancer from lesion images. Noyan et al. demonstrated a convolutional neural network that achieved 94% accuracy in identifying skin cells from microscopic Tzanck smear images. Concerns have been raised, however, regarding the limited diversity of datasets, particularly the underrepresentation of darker skin tones, which may reduce generalizability across populations.

In addition to skin cancer detection and analysis of tissue samples of histological smears, AI has been used in chronic and aesthetic dermatology.

MDalgorithms has developed a mobile application MDacne which uses machine learning to grade acne severity from smartphone selfies and generate treatment regimens. AI has also been used to diagnose inflammatory skin conditions such as rosacea, where an AI tool was reported to achieve accuracy rates of approximately 88–90% in identifying the disorder. MDhair applies AI analysis to scalp photographs to personalize hair loss treatments, with clinical trials reporting reductions in shedding, increased density, and improved scalp hydration.

A large study involving over one million individuals describes the use of AI-based systems in collecting demographic and clinical data on skin and hair health, enabling the identification of population-level trends.

According to some researchers, AI algorithms have been shown to be more effective than dermatologists at identifying cancer. However, a 2021 review article found that a majority of papers analyzing the performance of AI algorithms designed for skin cancer classification failed to use external test sets. Only four research studies were found in which the AI algorithms were tested on clinics, regions, or populations distinct from those they were trained on, and in each of those four studies the performance of dermatologists was found to be on par with that of the algorithms. Moreover, only one study was set in the context of a full clinical examination; the others were based on interaction through web apps or online questionnaires, with most based entirely on context-free images of lesions. In that one study, dermatologists significantly outperformed the algorithms. Many articles claiming superior performance of AI algorithms also fail to distinguish between trainees and board-certified dermatologists in their analyses.

It has also been suggested that AI could be used to automatically evaluate the outcome of maxillo-facial surgery or cleft palate therapy in regard to facial attractiveness or age appearance.

Gastroenterology

AI can play a role in various facets of the field of gastroenterology. Endoscopic exams such as esophagogastroduodenoscopies (EGD) and colonoscopies rely on rapid detection of abnormal tissue. By enhancing these endoscopic procedures with AI, clinicians can more rapidly identify diseases, determine their severity, and visualize blind spots. Early trials of AI detection systems for early stomach cancer have shown sensitivity close to that of expert endoscopists.

AI can assist doctors treating ulcerative colitis in detecting the microscopic activity of the disease in people and predicting when flare-ups will happen. For example, an AI-powered tool was developed to analyse digitised bowel samples (biopsies). The tool was able to distinguish with 80% accuracy between samples that show remission of colitis and those with active disease. It also predicted the risk of a flare-up happening with the same accuracy. These rates of successfully using microscopic disease activity to predict disease flare are similar to the accuracy of pathologists.

Infectious diseases

AI has shown potential in both the laboratory and clinical spheres of infectious disease medicine. During the COVID-19 pandemic, AI was used for early detection, tracking virus spread, and analysing virus behaviour, among other things. However, there were only a few examples of AI being used directly in clinical practice during the pandemic itself.

Other applications of AI around infectious diseases include support-vector machines identifying antimicrobial resistance, machine learning analysis of blood smears to detect malaria, and improved point-of-care testing of Lyme disease based on antigen detection. Additionally, AI has been investigated for improving diagnosis of meningitis, sepsis, and tuberculosis, as well as predicting treatment complications in hepatitis B and hepatitis C patients.

Musculoskeletal

AI has been used to identify causes of knee pain that doctors miss and that disproportionately affect Black patients. Underserved populations experience higher levels of pain. These disparities persist even after controlling for the objective severity of diseases like osteoarthritis, as graded by human physicians using medical images, raising the possibility that underserved patients' pain stems from factors external to the knee, such as stress. In one study, researchers used a machine-learning algorithm to show that standard radiographic measures of severity overlook objective but undiagnosed features that disproportionately affect the diagnosis and management of underserved populations with knee pain. They proposed that the new algorithmic measure, ALG-P, could potentially enable expanded access to treatments for underserved patients.

Neurology

The use of AI technologies has been explored for use in the diagnosis and prognosis of Alzheimer's disease (AD). For diagnostic purposes, machine learning models have been developed that rely on structural MRI inputs. The input datasets for these models are drawn from databases such as the Alzheimer's Disease Neuroimaging Initiative. Researchers have developed models that rely on convolutional neural networks with the aim of improving early diagnostic accuracy. Generative adversarial networks are a form of deep learning that have also performed well in diagnosing AD. There have also been efforts to develop machine learning models into forecasting tools that can predict the prognosis of patients with AD. Forecasting patient outcomes through generative models has been proposed by researchers as a means of synthesizing training and validation sets. They suggest that generated patient forecasts could be used to provide future models larger training datasets than current open access databases.

Oncology

AI has been explored for use in cancer diagnosis, risk stratification, molecular characterization of tumors, and cancer drug discovery. A particular challenge in oncologic care that AI is being developed to address is the ability to accurately predict which treatment protocols will be best suited for each patient based on their individual genetic, molecular, and tumor-based characteristics. AI has been trialed in cancer diagnostics with the reading of imaging studies and pathology slides.

In January 2020, Google DeepMind announced an algorithm capable of surpassing human experts in breast cancer detection in screening scans. A number of researchers, including Trevor Hastie, Joelle Pineau, and Robert Tibshirani among others, published a reply claiming that DeepMind's research publication in Nature lacked key details on methodology and code, "effectively undermin[ing] its scientific value" and making it impossible for the scientific community to confirm the work. In the MIT Technology Review, author Benjamin Haibe-Kains characterized DeepMind's work as "an advertisement" having little to do with science.

In July 2020, it was reported that an AI algorithm developed by the University of Pittsburgh achieved the highest accuracy to date in identifying prostate cancer, with 98% sensitivity and 97% specificity. In 2023, a study reported the use of AI for CT-based radiomics classification in grading the aggressiveness of retroperitoneal sarcoma with 82% accuracy, compared with 44% for lab analysis of biopsies.

Ophthalmology

Artificial intelligence-enhanced technology is being used as an aid in the screening of eye disease and prevention of blindness. In 2018, the U.S. Food and Drug Administration authorized the marketing of the first medical device to diagnose a specific type of eye disease, diabetic retinopathy, using an artificial intelligence algorithm. Moreover, AI technology may be used to further improve diagnosis rates because of its potential to decrease detection time.

Pathology

Ki67 stain calculation by the open-source software QuPath in a pure seminoma, which gives a measure of the proliferation rate of the tumor. The colors represent the intensity of expression: blue (no expression), yellow (low), orange (moderate), and red (high).

For many diseases, pathological analysis of cells and tissues is considered to be the gold standard of disease diagnosis. Methods of digital pathology allow microscopy slides to be scanned and digitally analyzed. AI-assisted pathology tools have been developed to assist with the diagnosis of a number of diseases, including breast cancer, hepatitis B, gastric cancer, and colorectal cancer. AI has also been used to predict genetic mutations and prognosticate disease outcomes.

AI is well-suited for use in low-complexity pathological analysis of large-scale screening samples, such as colorectal or breast cancer screening, thus lessening the burden on pathologists and allowing for faster turnaround of sample analysis. Several deep learning and artificial neural network models have shown accuracy similar to that of human pathologists, and a study of deep learning assistance in diagnosing metastatic breast cancer in lymph nodes showed that the accuracy of humans with the assistance of a deep learning program was higher than either the humans alone or the AI program alone.

Additionally, implementation of digital pathology is predicted to save over $12 million for a university center over the course of five years, though savings attributed to AI specifically have not yet been widely researched. The use of augmented and virtual reality could prove to be a stepping stone to wider implementation of AI-assisted pathology, as they can highlight areas of concern on a pathology sample and present them in real time to a pathologist for more efficient review. AI also has the potential to identify histological findings at levels beyond what the human eye can see, and has shown the ability to use genotypic and phenotypic data to more accurately detect the tumor of origin for metastatic cancer.

One of the major current barriers to widespread implementation of AI-assisted pathology tools is the lack of prospective, randomized, multi-center controlled trials determining the true clinical utility of AI for pathologists and patients, highlighting a current area of need in AI and healthcare research.

Pharmacy

In pharmacy, AI helps discover, develop and deliver medications, and can enhance patient care through personalized treatment plans.

Primary care

Primary care has become one key development area for AI technologies. AI in primary care has been used for supporting decision making, predictive modeling, and business analytics. There are only a few examples of AI decision support systems that were prospectively assessed for clinical efficacy when used in practice by physicians, but there are cases where their use had a positive effect on physicians' treatment choices.

As of 2022, in relation to elder care, AI robots had been helpful in providing older residents of assisted living facilities with entertainment and company. These bots allow staff in the home to have more one-on-one time with each resident, and are also programmed with broader abilities, such as knowing different languages and adapting the type of care to a patient's condition. Like other machine learning systems, the bots use algorithms to parse the given data, learn from it, and predict outcomes for the situation at hand.

Psychiatry and psychology

People have used AI chatbots such as ChatGPT as a replacement for therapy when such therapy is unaffordable or inaccessible. Chatbots used for this purpose lack boundaries and are unregulated, which can create a risk of unhealthy attachment among users and has been associated with chatbot psychosis. AI chatbots, including those used as a substitute for therapy, have provided harmful advice that has led to deaths. This has prompted some government and institutional efforts to regulate or outlaw chatbot therapy.

Chatbots have been studied as a way to treat anxiety and depression. Although AI applications have been developed and proposed for screening for suicidal ideation, legal and privacy issues and public opposition have limited implementation. Small training datasets contain biases that are inherited by the models, compromising their generalizability and stability. Such models may also be discriminatory against minority groups that are underrepresented in samples.

In 2023, US-based National Eating Disorders Association replaced its human helpline staff with a chatbot but had to take it offline after users reported receiving harmful advice from it.

Radiology

AI is being studied within the field of radiology to detect and diagnose diseases through computed tomography (CT) and magnetic resonance (MR) imaging. It may be particularly useful in settings where demand for human expertise exceeds supply, or where data is too complex to be efficiently interpreted by human readers. Several deep learning models have shown the capability to be roughly as accurate as healthcare professionals in identifying diseases through medical imaging, though few of the studies reporting these findings have been externally validated. AI can also provide non-interpretive benefit to radiologists, such as reducing noise in images, creating high-quality images from lower doses of radiation, enhancing MR image quality, and automatically assessing image quality. Further research investigating the use of AI in nuclear medicine focuses on image reconstruction, anatomical landmarking, and the enablement of lower doses in imaging studies. The analysis of images for supervised AI applications in radiology encompasses two primary techniques at present: (1) convolutional neural network-based analysis; and (2) utilization of radiomics.
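At the core of the convolutional neural network approach is the convolution operation itself, sketched here in plain Python with a fixed, hand-written vertical-edge kernel; real radiology models learn many such kernels from labeled data:

```python
# Minimal sketch of 2D convolution, the building block of CNN-based
# image analysis: slide a small kernel over the image and sum the
# elementwise products. The tiny image and kernel are illustrative.
def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of a nested-list image."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

image = [[0, 0, 1, 1] for _ in range(4)]      # dark left half, bright right half
kernel = [[1, 0, -1] for _ in range(3)]       # responds to vertical edges
conv2d(image, kernel)  # [[-3, -3], [-3, -3]]
```

A trained network stacks many learned kernels with nonlinearities between layers, but each layer still reduces to this sliding dot product.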

AI is also used in breast imaging to analyze screening mammograms, and can help improve breast cancer detection rates while reducing radiologists' reading workload.

As of 2025, 77% (967 out of 1247) of all FDA-approved AI-enabled medical devices are in radiology.

Industry

The trend of large health companies merging has allowed for greater health data accessibility. This greater availability of health data has laid the groundwork for implementing AI algorithms.

A large part of industry focus has been on clinical decision support systems. As more data is collected, machine learning algorithms adapt and allow for more robust responses and solutions. Numerous companies have been exploring the possibilities of incorporating big data in the healthcare industry, many of which have been investigating market opportunities through "data assessment, storage, management, and analysis technologies". With the market for AI expanding, large tech companies such as Apple, Google, Amazon, and Baidu all have their own AI research divisions, as well as millions of dollars allocated for the acquisition of smaller AI-based companies.

Large companies

The following are examples of large companies that are contributing to AI algorithms for use in healthcare:

  • Intel's venture capital arm Intel Capital invested in 2016 in the startup Lumiata, which uses AI to identify at-risk patients and develop care options.
  • Siemens Healthineers applies AI in imaging and diagnostics, including algorithms to reconstruct CT images and guide ultrasound procedures. It also uses AI to support treatment planning such as radiation therapy for cancer, improve point-of-care diagnostics, and automate lab workflows.
  • Microsoft's Hanover project, in partnership with Oregon Health & Science University's Knight Cancer Institute, analyzes medical research to predict the most effective cancer drug treatment options for patients. Other projects include medical image analysis of tumor progression and the development of programmable cells.
  • Philips Healthcare develops AI-powered diagnostic tools that analyze medical images to detect subtle anomalies. Its AI technologies also support precision oncology by assisting pathologists in cancer diagnosis, care management, and patient monitoring.

Smaller companies, applications

Elon Musk premiering the surgical robot that implants the Neuralink brain chip

Neuralink has developed a next-generation neuroprosthetic that interfaces with thousands of neural pathways in the brain. Its procedure allows a chip, roughly the size of a quarter, to be inserted in place of a section of the skull by a precision surgical robot to avoid accidental injury.

Tencent has been working on several medical systems and services. These include AI Medical Innovation System (AIMIS), an AI-powered diagnostic medical imaging service; WeChat Intelligent Healthcare; and Tencent Doctorwork.

Heidi Health develops an AI-powered medical scribe that transcribes clinician–patient conversations in real time and generates structured clinical notes. It has been adopted in multiple health systems, including MaineGeneral Health in the US. Another high-profile startup, Suki AI, offers an ambient AI documentation assistant that listens to clinical encounters, automatically generates clinical notes, and integrates them into electronic health records. These startups have been discussed as part of a broader market of AI ambient scribes focused on reducing clinician documentation workload.

Another example, this time in the medical imaging space, is Optellum, an Oxford-based medical technology spin-out that provides an artificial intelligence decision-support system for assessing incidental pulmonary nodules on CT scans and supporting earlier lung cancer diagnosis.

The Indian startup Haptik developed a WhatsApp chatbot in 2021 which answered questions associated with COVID-19 in India. Similarly, the software platform ChatBot, in partnership with health technology startup Infermedica, launched a COVID-19 Risk Assessment ChatBot.

Expanding care to developing nations

Artificial intelligence continues to expand in its ability to diagnose people accurately in nations where fewer doctors are accessible to the public. Technology companies such as SpaceX and the Raspberry Pi Foundation have enabled more developing countries to access computers and the internet than ever before. With the increasing capabilities of AI over the internet, advanced machine learning algorithms can allow patients to be accurately diagnosed when they would previously have had no way of knowing whether they had a life-threatening disease.

Using AI in developing nations that lack these resources will diminish the need for outsourcing and can improve patient care. AI can allow not only the diagnosis of patients in areas where healthcare is scarce, but also a good patient experience by drawing on patient records to find the best treatment. The ability of AI to adjust course as it goes also allows a patient's treatment to be modified based on what works for them, a level of individualized care that is nearly non-existent in developing countries.

Regulation

Challenges of the clinical use of AI have brought about a potential need for regulations. AI studies need to be completely and transparently reported to have value to inform regulatory approval. Depending on the phase of study, international consensus-based reporting guidelines (TRIPOD+AI, DECIDE-AI, CONSORT-AI) have been developed to provide recommendations on the key details that need to be reported.

A man speaking at the GDPR compliance workshop at the 2019 Entrepreneurship Summit

While regulations exist pertaining to the collection of patient data, such as the Health Insurance Portability and Accountability Act (HIPAA) in the US and the European General Data Protection Regulation (GDPR) pertaining to patients within the EU, health care AI is "severely under-regulated worldwide" as of 2025. It is unclear whether healthcare AI should be classified merely as software or as a medical device.

United Nations (WHO/ITU)

The ITU-WHO Focus Group on Artificial Intelligence for Health (FG-AI4H) has built a platform, known as the ITU-WHO AI for Health Framework, for the testing and benchmarking of AI applications in the health domain as a joint endeavor of ITU and WHO. As of November 2018, eight use cases were being benchmarked, including assessing breast cancer risk from histopathological imagery, guiding anti-venom selection from snake images, and diagnosing skin lesions.

USA

United States Food & Drug Administration

In 2015, the Office for Civil Rights (OCR) issued rules and regulations to protect the privacy of individuals' health information, requiring healthcare providers to follow certain privacy rules when using AI, to keep a record of how they use AI and to ensure that their AI systems are secure.

In May 2016, the White House announced its plan to host a series of workshops and formation of the National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence. In October 2016, the group published The National Artificial Intelligence Research and Development Strategic Plan, outlining its proposed priorities for Federally-funded AI research and development (within government and academia). The report notes a strategic R&D plan for the subfield of health information technology was in development stages.

In January 2021, the US FDA published a new Action Plan, entitled Artificial Intelligence (AI)/Machine Learning (ML)-Based Software as a Medical Device (SaMD) Action Plan. It laid out the FDA's future plans for the regulation of medical devices that include artificial intelligence in their software, with five main actions: 1. Tailored Regulatory Framework for AI/ML-based SaMD, 2. Good Machine Learning Practice (GMLP), 3. Patient-Centered Approach Incorporating Transparency to Users, 4. Regulatory Science Methods Related to Algorithm Bias & Robustness, and 5. Real-World Performance (RWP). This plan was in direct response to stakeholders' feedback on a 2019 discussion paper also published by the FDA.

Under President Biden, the Department of Health and Human Services (HHS) and the National Institute of Standards and Technology were instructed to develop regulation of healthcare AI. According to HHS, the OCR issued guidance on the ethical use of AI in healthcare in 2021. It outlined four core ethical principles that must be followed: respect for autonomy, beneficence, non-maleficence, and justice. Respect for autonomy requires that individuals have control over their own data and decisions. Beneficence requires that AI be used to do good, such as improving the quality of care and reducing health disparities. Non-maleficence requires that AI be used to do no harm, such as avoiding discrimination in decisions. Finally, justice requires that AI be used fairly, such as applying the same standards for decisions regardless of a person's race, gender, or income level. As of March 2021, the OCR had hired a Chief Artificial Intelligence Officer (OCAIO) to pursue the "implementation of the HHS AI strategy".

Under the second Trump administration, deregulation of health AI began on January 20, 2025: standards for collecting and sharing data became merely voluntary, statutory definitions for algorithmic discrimination, automation bias, and equity were cancelled, NIST faced cuts, and 19% of the FDA workforce was eliminated.

Europe

Other countries have implemented data protection regulations specifically targeting privacy invasions by companies. In Denmark, the Danish Expert Group on data ethics has adopted recommendations on "Data for the Benefit of the People". These recommendations are intended to encourage the responsible use of data in the business sector, with a focus on data processing. They include a focus on equality and non-discrimination with regard to bias in AI, as well as on human dignity, which is to outweigh profit and must be respected in all data processes.

The European Union has implemented the General Data Protection Regulation (GDPR) to protect citizens' personal data, which applies to the use of AI in healthcare. In addition, the European Commission has established guidelines to ensure the ethical development of AI, including the use of algorithms to ensure fairness and transparency. With the GDPR, the European Union was the first to regulate AI through data protection legislation. The Union regards privacy as a fundamental human right and seeks to prevent unconsented and secondary uses of data by private or public health facilities. Even while streamlining access to personal data for health research, it seeks to uphold the right to, and importance of, patient privacy.

In March 2024, the European Union approved the pivotal Artificial Intelligence Act (AI Act). The regulation applies to European companies and organizations, and foreign providers of AI systems in the EU market. The EU AI Act has a risk-based structure, where AI enabled medical devices are in the "high-risk" category, the highest risk category of permitted uses for AI.

In the United States, the Health Insurance Portability and Accountability Act (HIPAA) requires organizations to protect the privacy and security of patient information. The Centers for Medicare and Medicaid Services have also released guidelines for the development of AI-based medical applications.

In 2025, Europe was leading the USA on AI regulation while lagging in innovation, and at least one California-based biotech company was "engaging the European Medicines Agency earlier in development than previously anticipated to mitigate concerns about the FDA's ability to meet development timelines."

Ethical concerns

While research on the use of AI in healthcare aims to validate its efficacy in improving patient outcomes before its broader adoption, its use may introduce several new types of risk to patients and healthcare providers, such as algorithmic bias, do-not-resuscitate implications, and other machine morality issues. AI may also compromise the protection of patients' rights, such as the right to informed consent and the right to medical data protection.

Privacy and data collection

In order to effectively train machine learning systems and use them in healthcare, massive amounts of data must be gathered. Acquiring this data, however, comes at the cost of patient privacy, which can be controversial. For example, a survey conducted in the UK estimated that 63% of the population is uncomfortable with sharing their personal data in order to improve AI technology. The scarcity of real, accessible patient data is a hindrance that deters the progress of developing and deploying more AI in healthcare.

The lack of regulations surrounding AI in the United States has generated concerns about mismanagement of patient data, such as with corporations utilizing patient data for financial gain. For example, as of 2020, the Swiss healthcare company Roche reportedly purchased healthcare data for approximately 2 million cancer patients at an estimated total cost of $1.9 billion. This generated ethical concerns about whether it was fair to sell patients' data, even considering the benefits. Ultimately, the current potential of AI in healthcare is additionally hindered by concerns about mismanagement of data collected, especially in the United States.

The use of large language models for healthcare consultations introduces particular privacy risks, such as increased exposure of sensitive health information during consultations that may be collected for model retraining. A 2024 study of 846 Chinese users found that while 77.3% expressed willingness to use LLM-based healthcare services, privacy awareness varied significantly by demographics and cultural context. The research revealed a "privacy paradox" where users who claimed greater privacy knowledge and concern actually showed higher acceptance of information sharing, potentially due to better understanding of legitimate uses such as academic research and service improvement.

Privacy expectations for LLMs vary significantly across cultural contexts. Research in China has shown that users may have different privacy norms compared to Western populations, with factors such as age, education level, and medical background influencing acceptance of data sharing. Younger and more educated users tend to be more privacy-conscious, while those with medical backgrounds show greater acceptance of health data sharing for legitimate medical purposes.

Technological unemployment

A systematic review and thematic analysis in 2023 showed that most stakeholders including health professionals, patients, and the general public doubted that care involving AI could be empathetic, or fulfill beneficence.

According to a 2019 study, AI could replace up to 35% of jobs in the UK within the next 10 to 20 years. However, the study concluded that AI had not eliminated any healthcare jobs so far. If AI were to automate healthcare-related jobs, those most susceptible to automation would be jobs dealing with digital information, radiology, and pathology, rather than those involving doctor-to-patient interaction.

Outputs can be incorrect or incomplete, and erroneous diagnoses and recommendations can harm people.

Bias and discrimination

Since AI makes decisions solely on the data it receives as input, it is important that this data accurately represents patient demographics. In a hospital setting, patients do not have full knowledge of how predictive algorithms are created or calibrated. Medical establishments can therefore unfairly code their algorithms to discriminate against minorities and prioritize profits over optimal care, violating the ethical principles of social justice and non-maleficence. A recent scoping review identified 18 equity challenges, along with 15 strategies, mapped many-to-many, that can be implemented to help address them when AI applications are developed.

There can be unintended bias in algorithms that exacerbates social and healthcare inequities. Since AI's decisions are a direct reflection of its input data, that data must accurately represent patient demographics. If certain populations are underrepresented in healthcare data, AI tools trained on it are likely to make incorrect assumptions about those demographics and to impair the ability to provide appropriate care. White males are overrepresented in medical data sets, so having minimal patient data on minorities can lead AI to make more accurate predictions for majority populations, with unintended worse medical outcomes for minority populations. Collecting data from minority communities can also lead to medical discrimination: for instance, HIV is prevalent in some minority communities, and HIV status can be used to discriminate against patients. In addition to biases arising from sample selection, the different clinical systems used to collect data may also affect AI functionality. For example, radiographic systems and their outcomes (e.g., resolution) vary by provider, and clinician work practices, such as the positioning of the patient for radiography, can greatly influence the data and make comparability difficult. These biases can, however, be mitigated through careful implementation and the methodical collection of representative data.
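The kind of representation audit described above can be sketched minimally as follows, comparing each group's share of a training dataset with its share of a reference population; all group names and numbers here are hypothetical.

```python
def representation_gaps(dataset_counts, population_shares):
    """Compare each group's share of the dataset with its share of the population.

    Returns the ratio dataset_share / population_share per group; values well
    below 1.0 flag underrepresented groups for which model predictions may be
    less reliable.
    """
    total = sum(dataset_counts.values())
    return {
        group: (dataset_counts[group] / total) / population_shares[group]
        for group in dataset_counts
    }

# Hypothetical training cohort vs. census-style population shares.
counts = {"group_a": 800, "group_b": 150, "group_c": 50}
shares = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
gaps = representation_gaps(counts, shares)
underrepresented = [g for g, r in gaps.items() if r < 0.8]
```

Here `group_c` contributes 5% of the training data despite making up 15% of the population (a ratio of one third), the pattern that leads to less accurate predictions for that group.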

A final source of algorithmic bias, which has been called "label choice bias", arises when proxy measures used to train algorithms build in bias against certain groups. For example, a widely used algorithm predicted health care costs as a proxy for health care needs, and used those predictions to allocate resources to patients with complex health needs. This introduced bias because Black patients incur lower costs even when they are just as unhealthy as White patients. Solutions to label choice bias aim to match the actual target (what the algorithm is predicting) more closely to the ideal target (what researchers want the algorithm to predict); in the prior example, instead of predicting cost, researchers would predict healthcare needs directly. Adjusting the target led to almost double the number of Black patients being selected for the program.
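The effect of label choice can be illustrated with a toy sketch (all patients and numbers are hypothetical): when one group has systematically lower recorded costs at the same level of need, ranking by the cost proxy selects fewer patients from that group than ranking by actual need.

```python
# Hypothetical patients: true health need vs. observed healthcare cost.
# Group B has systematically lower recorded costs at the same level of need,
# mirroring the cost-as-proxy bias described above.
patients = [
    {"id": 1, "group": "A", "need": 9, "cost": 9000},
    {"id": 2, "group": "B", "need": 9, "cost": 6000},
    {"id": 3, "group": "A", "need": 7, "cost": 7000},
    {"id": 4, "group": "B", "need": 7, "cost": 4500},
    {"id": 5, "group": "A", "need": 4, "cost": 5000},
    {"id": 6, "group": "B", "need": 8, "cost": 5200},
]

def select_top(patients, target, k=3):
    """Select the k patients ranked highest on the given target variable."""
    return sorted(patients, key=lambda p: p[target], reverse=True)[:k]

# Count how many group-B patients each target variable selects.
b_by_cost = sum(p["group"] == "B" for p in select_top(patients, "cost"))
b_by_need = sum(p["group"] == "B" for p in select_top(patients, "need"))
```

In this toy data, switching the target from cost to need doubles the number of group-B patients selected (from one to two), analogous to the real-world effect reported for the algorithm above.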

History

Research in the 1960s and 1970s produced the first problem-solving program, or expert system, known as Dendral. While it was designed for applications in organic chemistry, it provided the basis for a subsequent system MYCIN, considered one of the most significant early uses of artificial intelligence in medicine. MYCIN and other systems such as INTERNIST-1 and CASNET did not achieve routine use by practitioners, however.

The 1980s and 1990s brought the proliferation of the microcomputer and new levels of network connectivity. During this time, there was a recognition by researchers and developers that AI systems in healthcare must be designed to accommodate the absence of perfect data and build on the expertise of physicians. Approaches involving fuzzy set theory, Bayesian networks, and artificial neural networks, have been applied to intelligent computing systems in healthcare.

Medical and technological advancements over this half-century that have enabled the growth of healthcare-related applications of AI include:
