
Thursday, September 24, 2020

Ethics of artificial intelligence

From Wikipedia, the free encyclopedia
 

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent entities. It can be divided into a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs). It also includes the issues of singularity and superintelligence.

Robot ethics

The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots. It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.

Robot rights

"Robot rights" is the concept that people should have moral obligations towards their machines, similar to human rights or animal rights. It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to robot duty to serve human, by analogy with linking human rights to human duties before society. These could include the right to life and liberty, freedom of thought and expression and equality before the law. The issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry.

Experts disagree on whether specific and detailed laws will be required soon, or whether they can safely wait for the distant future. Glenn McGee reports that sufficiently humanoid robots may appear by 2020; Ray Kurzweil sets the date at 2029. Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.

The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:

61. If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.

In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some observers found this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating of human rights and the rule of law.

The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.

Joanna Bryson has argued that creating AI that requires rights is both avoidable and in itself unethical, a burden both to the AI agents and to human society.

Threat to human dignity

Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these:

  • A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
  • A therapist (as was proposed by Kenneth Colby in the 1970s)
  • A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
  • A soldier
  • A judge
  • A police officer

Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."

Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer," pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all. However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in essence, nothing more than fancy curve-fitting machines. Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases become formalized and engrained, which makes them even more difficult to spot and fight against. AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique: "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.
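
To make the "curve fitting" point concrete, here is a minimal sketch, using entirely synthetic and hypothetical data, of how a model trained on biased historical rulings reproduces that bias in its predictions; the variable names and penalty size are invented for illustration:

```python
# Minimal sketch: a model trained on biased historical decisions reproduces
# the bias. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # protected attribute: 0 or 1
merit = rng.normal(0, 1, n)          # legitimate signal

# Historical rulings: same merit, but group 1 was penalized by past judges.
favorable = (merit + 0.8 * (group == 0) + rng.normal(0, 1, n)) > 0.5

X = np.column_stack([merit, group])  # the model "sees" the group attribute
model = LogisticRegression().fit(X, favorable)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted favorable rate = {rate:.2f}")
# The historical penalty against group 1 is now formalized in the model:
# group 1 systematically receives fewer predicted favorable outcomes.
```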

Bill Hibbard writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."

Transparency, accountability, and open source

Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts. Ben Goertzel and David Hart created OpenCog as an open source framework for AI development. OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open source AI beneficial to humanity. There are numerous other open source AI developments.

Unfortunately, making code open source does not make it comprehensible, which means that by many definitions the AI it implements is not transparent. The IEEE has a standardisation effort on AI transparency, which identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organisations may be a public bad, that is, do more damage than good. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted a notable blog post on this topic, asking for government regulation to help determine the right thing to do.

Not only companies, but many other researchers and citizen advocates, recommend government regulation as a means of ensuring transparency and, through it, human accountability. (AlgorithmWatch maintains an updated list of AI ethics guidelines.) This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term. The OECD, UN, EU, and many countries are presently working on strategies for regulating AI and finding appropriate legal frameworks.

On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial Intelligence". This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity, and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally.

Biases in AI systems

AI has become increasingly embedded in facial and voice recognition systems. Some of these systems have real business implications and directly impact people. These systems are vulnerable to biases and errors introduced by their human makers, and the data used to train them can itself be biased. For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender: they could determine the gender of white men more accurately than that of men with darker skin. Similarly, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found they had higher error rates when transcribing Black people's voices than white people's. Amazon.com Inc.'s termination of its AI hiring and recruitment tool is another example: the algorithm preferred male candidates over female ones because it had been trained on data collected over a 10-year period that came mostly from male applicants.
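
The disparities described above are measurable. A minimal audit sketch, with hypothetical predictions and group labels, shows how per-group error rates expose a gap that a single aggregate accuracy number would hide:

```python
# Minimal audit sketch: compare a classifier's error rate across demographic
# groups, in the spirit of the studies described above. Data is hypothetical.
import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    """Return {group: error rate} for a set of predictions."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[g] = float(np.mean(y_true[mask] != y_pred[mask]))
    return rates

# Toy example: the system is wrong far more often for group "B".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(error_rate_by_group(y_true, y_pred, groups))
# {'A': 0.25, 'B': 0.75} -- a disparity the overall 50% accuracy would hide.
```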

Bias can creep into algorithms in many ways. For example, Friedman and Nissenbaum identify three categories of bias in computer systems: preexisting bias, technical bias, and emergent bias. In natural language processing, a highly influential branch of AI, problems can arise from the "text corpus", the source material the algorithm uses to learn about the relationships between different words.
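
As an illustration of corpus-driven bias, the sketch below uses made-up word vectors (real ones would be learned from a text corpus) to show how an occupation word can end up measurably closer to one gendered word than another:

```python
# Sketch of how a text corpus can encode bias: word vectors learned from text
# place words that co-occur close together, so occupational words can end up
# nearer one gendered word than another. The vectors below are invented for
# illustration; real ones would come from a model trained on a corpus.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb = {  # hypothetical 3-d embeddings
    "he":       np.array([0.9, 0.1, 0.0]),
    "she":      np.array([0.1, 0.9, 0.0]),
    "nurse":    np.array([0.2, 0.8, 0.1]),
    "engineer": np.array([0.8, 0.2, 0.1]),
}

for word in ("nurse", "engineer"):
    bias = cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])
    print(f"{word}: he-vs-she association = {bias:+.2f}")
# A positive score leans "he", negative leans "she" -- an artifact of the
# training text, not of the algorithm's code.
```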

Large companies such as IBM and Google have started researching and addressing bias in their systems. One proposed remedy is to create documentation of the data used to train AI systems.
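
One published proposal along these lines is "datasheets for datasets". The sketch below is an illustrative, simplified schema; the field names are chosen for this example rather than taken from any standard:

```python
# Sketch of dataset documentation. Field names are an illustrative subset.
from dataclasses import dataclass, field

@dataclass
class DatasetDatasheet:
    name: str
    motivation: str              # why the dataset was created
    composition: str             # what the instances represent
    collection_process: str      # how, when, and by whom it was gathered
    known_gaps: list[str] = field(default_factory=list)  # under-represented groups, etc.

sheet = DatasetDatasheet(
    name="hiring-history-2008-2018",
    motivation="Train a resume-screening model.",
    composition="One record per past applicant, with outcome.",
    collection_process="Exported from internal HR systems, 2008-2018.",
    known_gaps=["Applicant pool was predominantly male."],
)
print(sheet.known_gaps)  # a reviewer can see the bias risk before training
```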

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it.

Liability for self-driving cars

The wide use of partially to fully autonomous cars appears imminent. But fully autonomous technologies present new issues and challenges, and a debate has arisen over legal liability: which party is responsible when these cars get into accidents? In one reported case, a driverless car hit a pedestrian, raising the question of whom to blame for the accident: even though a driver was inside the car at the time, the controls were fully in the hands of the computer.

One such case took place on March 19, 2018, when a self-driving Uber car struck and killed Elaine Herzberg, a jaywalking pedestrian, in Arizona. Pending further investigation of how the pedestrian came to be injured and killed, the case prompts people to reconsider liability not only for partially or fully automated cars, but for the other stakeholders who might be responsible in such a situation. The automated car could detect nearby cars and objects in order to drive itself, but it lacked the ability to react to a pedestrian in its path, since its designers did not expect people to appear on the road under normal conditions. This raises the issue of whether the driver, the pedestrian, the car company, or the government should be held responsible in such a case.

One analysis argues that the driving functions of current partially or fully automated cars are still immature: they require the driver to pay attention and retain full control of the vehicle, since these features merely make driving less tiring rather than taking it over. On this view, the government bears the most responsibility for the current situation: it should regulate car companies, and it should educate drivers who over-rely on self-driving features that these are technologies that bring convenience, not a shortcut. Before autonomous cars become widely used, these issues need to be tackled through new policies.

Weaponization of artificial intelligence

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. One researcher states that autonomous robots might be more humane, as they could make decisions more effectively.

Within the last decade, there has been intensive research into autonomous systems with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots." From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, and that is why there should be a set moral framework that the AI cannot override.

There has been a recent outcry with regard to the engineering of artificial-intelligence weapons that has included ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea. Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.

"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.

Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology." These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".

Machine ethics

Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.

Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances. More recently, academics and many governments have challenged the idea that AI can itself be held accountable. A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.
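
Asimov's point that fixed laws break down can be illustrated with a toy scenario, entirely invented here, in which every available action violates the highest-priority law:

```python
# Toy illustration of why fixed rule sets break down, in the spirit of
# Asimov's stories. The scenario and encoding are invented for illustration.

LAWS = [  # lower index = higher priority
    "do not injure a human or, through inaction, allow a human to come to harm",
    "obey orders given by humans",
    "protect your own existence",
]

def violated_laws(action):
    """Return indices of laws an action violates in our toy scenario:
    two humans are in danger and the robot can only reach one."""
    if action == "save human A":
        return [0]        # inaction toward human B allows harm
    if action == "save human B":
        return [0]        # inaction toward human A allows harm
    if action == "do nothing":
        return [0, 1]     # allows harm to both, and disobeys a cry for help
    return []

for action in ("save human A", "save human B", "do nothing"):
    print(action, "->", [LAWS[i] for i in violated_laws(action)])
# Every available action violates the First Law: the fixed rules cannot
# anticipate this circumstance, which is Asimov's recurring theme.
```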

In 2009, during an experiment at the Laboratory of Intelligent Systems in the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.

The President of the Association for the Advancement of Artificial Intelligence has commissioned a study of this issue, pointing to programs like the Language Acquisition Device, which can emulate human interaction.

Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity." He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism. The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, including the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed whether, and to what extent, computers and robots might acquire some level of autonomy, and to what degree they could use such abilities to pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.

However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons. Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit, or whether they might end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation, etc.

In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics, both by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, the field has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis). Chris Santos-Lang has argued in the opposite direction, on the grounds that the norms of any age must be allowed to change, and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".
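
The transparency argument can be illustrated with a small sketch: a shallow decision tree prints as explicit, auditable rules, whereas a neural network's weights do not. The features and labels below are toy placeholders, not real data:

```python
# Sketch of the transparency argument: a small decision tree can be printed
# as human-readable rules, unlike a neural network's weights. Toy data only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: [prior_offenses, age]; label: 1 = reoffended
X = [[0, 30], [5, 22], [1, 45], [4, 19], [0, 60], [6, 25]]
y = [0, 1, 0, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["prior_offenses", "age"]))
# The output is an explicit rule list (e.g. "prior_offenses <= 2.50 -> class 0")
# that can be audited against norms like stare decisis; a neural network
# offers no comparably inspectable decision record.
```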

According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deepfakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don't require a human controller.

Singularity

Many researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals. In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.

However, instead of overwhelming the human race and leading to our destruction, Bostrom has also asserted that superintelligence can help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.

The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, whereas human minds carry evolved dispositions such as kindness, there is little reason to suppose that an artificially designed mind would have such an adaptation. AI researchers such as Stuart J. Russell and Bill Hibbard have proposed design strategies for developing beneficial machines.
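
Here is a toy example of the utility-function problem, with an invented objective and action set: the optimizer satisfies the stated objective exactly while violating the intent behind it:

```python
# Toy illustration of a utility function that satisfies its letter while
# violating "common sense". The objective and actions are invented.

# Goal we meant: reduce actual disease. Objective we wrote: minimize
# *reported* cases.
actions = {
    "fund treatment":        {"actual_cases": 40, "reported_cases": 40},
    "fund prevention":       {"actual_cases": 30, "reported_cases": 30},
    "suppress case reports": {"actual_cases": 90, "reported_cases": 0},
}

def utility(outcome):
    return -outcome["reported_cases"]   # the formally specified objective

best = max(actions, key=lambda a: utility(actions[a]))
print("optimizer chooses:", best)  # -> "suppress case reports"
# The chosen action maximizes the stated utility function exactly, yet is the
# worst outcome by the standard we failed to write down.
```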

AI ethics organisations

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as an open platform for discussion about artificial intelligence. They stated: "This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning." Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.

A number of organizations pursue a technical theory of AI goal-system alignment with human values. Among these are the Machine Intelligence Research Institute, the Future of Humanity Institute, the Center for Human-Compatible Artificial Intelligence, and the Future of Life Institute.

The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organisation.

Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organisations to ensure AI is ethically applied.

In fiction

The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between kinds of software is sentient and non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, with the best motives, created the system to give medical assistance in emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of creating sentient computers.

The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games. It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scale neural network. This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.

Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.

Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.

Literature

The standard bibliography is on PhilPapers, covering the ethics of AI and robot ethics.

"Ethics of Artificial Intelligence and Robotics" (April 2020) in the Stanford Encyclopedia of Philosophy is a comprehensive exposition of the academic debates.

 

Egoism

From Wikipedia, the free encyclopedia
 

Egoism is the philosophy concerned with the role of the self, or ego, as the motivation and goal of one's own action. Different theories on egoism encompass a range of disparate ideas and can be categorized into descriptive or normative forms. That is, they may be interested in either describing that people do act in self-interest or prescribing that they should.

The New Catholic Encyclopedia states of egoism that it "incorporates in itself certain basic truths: it is natural for man to love himself; he should moreover do so, since each one is ultimately responsible for himself; pleasure, the development of one's potentialities, and the acquisition of power are normally desirable." The moral censure of self-interest is a common subject of critique in egoist philosophy, with such judgments being examined as means of control and the result of power relations. Egoism may also reject the idea that insight into one's internal motivation can arrive extrinsically, such as from psychology or sociology, though such a rejection is not present in the philosophy of Friedrich Nietzsche, for example.

Etymology

The term egoism is derived from the French égoïsme, from the Latin ego (first person singular personal pronoun; "I") with the French -ïsme ("-ism"). As such, the term shares early etymology with egotism.

Descriptive theories

The descriptive variants of egoism are concerned with self-interest as a factual description of human motivation and, in their furthest application, with the claim that all human motivation stems from the desires and interests of the ego. In these theories, action which is self-interested may be simply termed egoistic.

The view that people tend to act in their own self-interest is called default egoism, whereas psychological egoism is the doctrine that holds that all motivations are rooted solely in psychological self-interest. That is, in its strong form, that even seemingly altruistic actions are only disguised as such and are always self-serving. Its weaker form instead holds that, even if altruistic motivation is possible, the willed action necessarily becomes egoistic in serving the ego's will. In contrast to this and philosophical egoism, biological egoism (also called evolutionary egoism) describes motivations rooted solely in reproductive self-interest (i.e. reproductive fitness). Furthermore, selfish gene theory holds that it is the self-interest of genetic information that conditions human behaviour.

In moral psychology

The word "good" is from the start in no way necessarily tied up with "unegoistic" actions, as it is in the superstition of those genealogists of morality. Rather, that occurs for the first time with the collapse of aristocratic value judgments, when this entire contrast between "egoistic" and "unegoistic" pressed itself ever more strongly into human awareness—it is, to use my own words, the instinct of the herd which, through this contrast, finally gets its word (and its words). — Friedrich Nietzsche, On the Genealogy of Morals

In his On the Genealogy of Morals, Friedrich Nietzsche traces the origins of master–slave morality to fundamentally egoistic value judgments. In the aristocratic valuation, excellence and virtue come as a form of superiority over the common masses, which the priestly valuation, in ressentiment of power, seeks to invert—where the powerless and pitiable become the moral ideal. This upholding of unegoistic actions is therefore seen as stemming from a desire to reject the superiority or excellency of others. He holds that all normative systems which operate in the role often associated with morality favor the interests of some people, often, though not necessarily, at the expense of others.

Normative theories

Theories which hold egoism to be normative stipulate that the ego ought to promote its own interests above other values. Where this ought is held to be a pragmatic judgment it is termed rational egoism and where it is held to be a moral judgment it is termed ethical egoism. The Stanford Encyclopedia of Philosophy states that "ethical egoism might also apply to things other than acts, such as rules or character traits" but that such variants are uncommon. Furthermore, conditional egoism is a consequentialist form of ethical egoism which holds that egoism is morally right if it leads to morally acceptable ends. John F. Welsh, in his work Max Stirner's Dialectical Egoism: A New Interpretation, coins the term dialectical egoism to describe an interpretation of the egoist philosophy of Max Stirner as being fundamentally dialectical.

Normative egoism, as in the case of Stirner, need not reject that some modes of behavior are to be valued above others—such as Stirner's affirmation that non-restriction and autonomy are to be most highly valued. Contrary theories, however, may just as easily favour egoistic domination of others.

Relations with altruism

In 1851, French philosopher Auguste Comte coined the term altruism (French: altruisme; from Italian altrui from Latin alteri, meaning 'others') as an antonym for egoism. The term entered English in 1853 and was popularized by advocates of Comte's moral philosophy—principally, that self-regard must be replaced with only the regard for others.

Comte argues that only two human motivations exist, egoistic and altruistic, and that the two cannot be mediated. That is, one must always predominate over the other. For Comte, the total subordination of the self to altruism is a necessary condition to social and individual benefit. Friedrich Nietzsche, rather than rejecting the practice of altruism, warns that despite there being neither much altruism nor equality in the world, there is almost universal endorsement of their value and, notoriously, even by those who are its worst enemies in practice. Egoism commonly views the subordination of the self to altruism as either a form of domination that limits freedom, an unethical or irrational principle, or an extension of some egoistic root cause.

In evolutionary theory, biological altruism is the observed occurrence of an organism acting to the benefit of others at the cost of its own reproductive fitness. While biological egoism does grant that an organism may act to the benefit of others, it describes only such when in accordance with reproductive self-interest. Kin altruism and selfish gene theory are examples of this division. On biological altruism, the Stanford Encyclopedia of Philosophy states the following:

Contrary to what is often thought, an evolutionary approach to human behaviour does not imply that humans are likely to be motivated by self-interest alone. One strategy by which ‘selfish genes’ may increase their future representation is by causing humans to be non-selfish, in the psychological sense.

This is a central topic within contemporary discourse of psychological egoism.
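
Kin altruism is standardly formalized by Hamilton's rule, which the article does not quote: an altruistic trait is favored when rb > c, where r is genetic relatedness, b the benefit to the recipient, and c the cost to the actor. A minimal worked check:

```python
# Hamilton's rule: a standard formalization of kin altruism (not quoted in
# the text above). An altruistic trait can spread when r * b > c.

def hamilton_favors_altruism(r, b, c):
    """r: relatedness, b: benefit to recipient, c: cost to actor."""
    return r * b > c

# Helping a full sibling (r = 0.5) pays only if the benefit more than
# doubles the cost; helping a stranger (r ~ 0) never pays in these terms.
print(hamilton_favors_altruism(r=0.5, b=3.0, c=1.0))  # True
print(hamilton_favors_altruism(r=0.0, b=3.0, c=1.0))  # False
```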

Relations with nihilism

Max Stirner's rejection of absolutes and abstract concepts often places him among the first philosophical nihilists. Furthermore, his philosophy has been disputedly recognised as among the precursors to nihilism, existentialism, poststructuralism and postmodernism. The Stanford Encyclopedia of Philosophy states:

Characterisations of Stirner as a "nihilist"—in the sense that he rejects all normative judgement—would also appear to be mistaken.

... Stirner is clearly committed to the non-nihilistic view that certain kinds of character and modes of behaviour (namely autonomous individuals and actions) are to be valued above all others. His conception of morality is, in this respect, a narrow one, and his rejection of the legitimacy of moral claims is not to be confused with a denial of the propriety of all normative or ethical judgement.

Russian philosophers Dmitry Pisarev and Nikolay Chernyshevsky, both advocates of rational egoism, were also major proponents of Russian nihilism.

Egoism and Nietzsche

I submit that egoism belongs to the essence of a noble soul, I mean the unalterable belief that to a being such as "we," other beings must naturally be in subjection, and have to sacrifice themselves. The noble soul accepts the fact of his egoism without question, and also without consciousness of harshness, constraint, or arbitrariness therein, but rather as something that may have its basis in the primary law of things:—if he sought a designation for it he would say: "It is justice itself." — Friedrich Nietzsche, Beyond Good and Evil

The terms nihilism and anti-nihilism have both been used to categorise the philosophy of Friedrich Nietzsche. His thought has similarly been linked to forms of both descriptive and normative egoism.

Nietzsche, in attacking the widely held moral abhorrence for egoistic action, seeks to free higher human beings from their belief that this morality is good for them. He rejects Christian and Kantian ethics as merely the disguised egoism of slave morality.

Egoism and postmodernity

Max Stirner's philosophy strongly rejects modernity and is highly critical of the increasing dogmatism and oppressive social institutions that embody it. In order that it might be surpassed, egoist principles are upheld as a necessary advancement beyond the modern world. The Stanford Encyclopedia states that Stirner's historical analyses serve to "undermine historical narratives which portray the modern development of humankind as the progressive realisation of freedom, but also to support an account of individuals in the modern world as increasingly oppressed". This critique of humanist discourses especially has linked Stirner to more contemporary poststructuralist thought.

Relations with political theory

Since normative egoism rejects the moral obligation to subordinate the ego to a ruling class, it is predisposed to certain political implications. The Internet Encyclopedia of Philosophy states:

Egoists ironically can be read as moral and political egalitarians glorifying the dignity of each and every person to pursue life as they see fit. Mistakes in securing the proper means and appropriate ends will be made by individuals, but if they are morally responsible for their actions they not only will bear the consequences but also the opportunity for adapting and learning.

In contrast with this, however, such an ethic may not morally obligate against the egoistic exercise of power over others. On these grounds, Friedrich Nietzsche criticizes egalitarian morality and political projects as unconducive to the development of human excellence. Max Stirner's own conception, the union of egoists detailed in his work The Ego and Its Own, proposed a form of societal relations whereby limitations on egoistic action are rejected. When posthumously adopted by the anarchist movement, this became the foundation for egoist anarchism.

Stirner's variant of property theory is similarly dialectical, where the concept of ownership is only the personal distinction made between what is one's property and what is not. Consequently, it is the exercise of control over property which constitutes the nonabstract possession of it. In contrast to this, Ayn Rand incorporates capitalist property rights into her egoist theory.

Egoism and revolutionary politics

Egoist philosopher Nikolai Gavrilovich Chernyshevsky was the dominant intellectual figure behind the 1860–1917 revolutionary movement in Russia, which resulted in the assassination of Tsar Alexander II in 1881, eight years before Chernyshevsky's own death in 1889. Dmitry Pisarev was a similarly radical influence within the movement, though he did not personally advocate political revolution.

Philosophical egoism has also found wide appeal among anarchist revolutionaries and thinkers, such as John Henry Mackay, Benjamin Tucker, Émile Armand, Han Ryner, Gérard de Lacaze-Duthiers, Renzo Novatore, Miguel Giménez Igualada, and Lev Chernyi. Though he was not involved in any revolutionary movement himself, Max Stirner gave the entire school of individualist anarchism much of its intellectual heritage.

Egoist philosophy may be misrepresented as a principally revolutionary field of thought. However, neither Hobbesian nor Nietzschean theories of egoism approve of political revolution. Anarchism and revolutionary socialism were also strongly rejected by Ayn Rand and her followers.

Egoism and fascism

The philosophies of both Nietzsche and Stirner were heavily appropriated by fascist and proto-fascist ideologies. Nietzsche in particular has infamously been misrepresented as a predecessor to Nazism and a substantial academic effort was necessary to disassociate his ideas from their aforementioned appropriation.

At first sight, Nazi totalitarianism may seem the opposite of Stirner's radical individualism. But fascism was above all an attempt to dissolve the social ties created by history and replace them by artificial bonds among individuals who were expected to render explicit obedience to the state on grounds of absolute egoism. Fascist education combined the tenets of asocial egoism and unquestioning conformism, the latter being the means by which the individual secured his own niche in the system. Stirner's philosophy has nothing to say against conformism, it only objects to the Ego being subordinated to any higher principle: the egoist is free to adjust to the world if it is clear he will better himself by doing so. His 'rebellion' may take the form of utter servility if it will further his interest; what he must not do is to be bound by 'general' values or myths of humanity. The totalitarian ideal of a barrack-like society from which all real, historical ties have been eliminated is perfectly consistent with Stirner's principles: the egoist, by his very nature, must be prepared to fight under any flag that suits his convenience.

Despite this, the influence of Stirner's philosophy has been primarily anti-authoritarian. More contemporarily, Rand's thought has lent influence to the alt-right movement.


 

Self-love

From Wikipedia, the free encyclopedia

Self-love, defined as "love of self" or "regard for one's own happiness or advantage", has been conceptualized both as a basic human necessity and as a moral flaw, akin to vanity and selfishness, synonymous with amour-propre, conceit, conceitedness, egotism, et al. However, throughout the centuries self-love has adopted a more positive connotation through pride parades, the Self-Respect Movement, self-love protests, the hippie era, the New Age feminist movement, and the increase in mental health awareness that promotes self-love as intrinsic to self-help and to support groups working to prevent substance abuse and suicide.

Views

Laozi (c. 601–530 BC) and Taoism hold that it is very important for people to be completely natural (ziran).

The Hindu arishadvargas (major sins) are short-term self-benefiting pursuits that are ultimately damaging. These include mada (pride).

Gautama Buddha (c. 563–483 BC) and Buddhism believe that the desires of the self are the root of all evil. However, this is balanced with karuṇā (compassion).

Jainism believes that the four kashaya (passions) stop people escaping the cycle of life and death.

Confucius (551–479 BC) and Confucianism values society over the self.

Yang Zhu (440–360 BC) and Yangism viewed wei wo, or "everything for myself", as the only virtue necessary for self-cultivation. All of what is known of Yangism comes from its contemporary critics; Yang's beliefs were hotly contested.

The thoughts of Aristotle (384–322 BC) about self-love (philautia) are recorded in the Nicomachean Ethics and Eudemian Ethics. Nicomachean Ethics Book 9, Chapter 8 focuses on it particularly. In this passage, Aristotle argues that people who love themselves to achieve unwarranted personal gain are bad, but those who love themselves to achieve virtuous principles are the best sort of good. He says the former kind of self-love is much more common than the latter.

Cicero (106–43 BC) considered that those who were sui amantes sine rivali (lovers of themselves without rivals) were doomed to end in failure.

Jesus (c. 4 BC–AD 30) prioritised the love of God and commanded people to love others as themselves. Paul of Tarsus, an early follower of Jesus, wrote in his letter to the Philippian church that inordinate self-love was opposed to the love of God. The author of the New Testament letter of James held the same belief. Another Bible verse that speaks to the importance of self-love is Mark 12:31: "The second is this: 'Love your neighbor as yourself.'"

However, Elaine Pagels, based on scholarship of the Nag Hammadi library and the Greek New Testament, argues that Jesus taught that self-love (philautia) was intrinsic to neighborly or brotherly love (philia) and to living according to the law of the love of the most high (agapē). She wrote of this in her award-winning 1979 book The Gnostic Gospels. She and later scholars such as Étienne Balibar and Thomas Kiefer have compared this to Aristotle's discourse on the proportion of self-love (philautia) intrinsic to philia (in Nicomachean Ethics Book 9, Chapter 8).

Christian monk Evagrius Ponticus (345–399) believed excessive self-love (hyperēphania – pride) was one of eight key sins. His list of sins was later lightly adapted by Pope Gregory I as the "seven deadly sins", and this list then became an important part of the doctrine of the western church. Under this system, pride is the original and most deadly of the sins, a position expressed strongly in fiction in Dante's Divine Comedy.

Augustine (354–430) – with his theology of evil as a mere distortion of the good – considered that the sin of pride was only a perversion of a normal, more modest degree of self-love.

The Sikhs believe that the Five Thieves are the core human weaknesses that steal the innately good common sense from people. These selfish desires cause great problems.

In 1612 Francis Bacon condemned extreme self-lovers, who would burn down their own home, only to roast themselves an egg.

In the 1660s Baruch Spinoza wrote in his book Ethics that self-preservation was the highest virtue.

Jean-Jacques Rousseau (1712–1778) believed there were two kinds of self-love. One was "amour de soi" (French for "love of self") which is the drive for self-preservation. Rousseau considered this drive to be the root of all human drives. The other was "amour-propre" (often also translated as "self-love", but which also means "pride"), which refers to the self-esteem generated from being appreciated by other people.

The concept of "ethical egoism" was introduced by the philosopher Henry Sidgwick in his book The Methods of Ethics, written in 1874. Sidgwick compared egoism to the philosophy of utilitarianism, writing that whereas utilitarianism sought to maximize overall pleasure, egoism focused only on maximizing individual pleasure.

In 1890, psychologist William James examined the concept of self-esteem in his influential textbook Principles of Psychology. Robert H. Wozniak later wrote that William James's theory of self-love in this book was measured in "... three different but interrelated aspects of self: the material self (all those aspects of material existence in which we feel a strong sense of ownership, our bodies, our families, our possessions), the social self (our felt social relations), and the spiritual self (our feelings of our own subjectivity)".

In 1956, psychologist and social philosopher Erich Fromm proposed that loving oneself is different from being arrogant, conceited or egocentric: it means instead caring about oneself and taking responsibility for oneself. Fromm proposed a re-evaluation of self-love in a more positive sense, arguing that in order to truly love another person, one first needs to love oneself by respecting and knowing oneself (e.g. being realistic and honest about one's strengths and weaknesses).

In the 1960s, Erik H. Erikson similarly wrote of a post-narcissistic appreciation of the value of the ego, while Carl Rogers saw one result of successful therapy as the regaining of a quiet sense of pleasure in being one's own self.

Self-love or self-worth was defined in 2003 by Aiden Gregg and Constantine Sedikides as "referring to a person's subjective appraisal of himself or herself as intrinsically positive or negative".

Mental health

The role of self-love in mental health was first described by William Sweetser (1797–1875) as the maintenance of "mental hygiene". His analysis, set out in his essay "Temperance Society", published August 26, 1830, claimed that the regular maintenance of mental hygiene had a positive impact on the well-being of individuals and of the community as well.

Lack of self-love increases the risk of suicide, according to the American Association of Suicidology. The association conducted a study in 2008 that researched the impact of low self-esteem and lack of self-love on suicidal tendencies and attempts. It defined self-love as "beliefs about oneself (self-based self-esteem) and beliefs about how other people regard oneself (other-based self-esteem)". The study concluded that "depression, hopelessness, and low self-esteem are implications of vulnerability factors for suicide ideation" and that "these findings suggest that even in the context of depression and hopelessness, low self-esteem may add to the risk for suicide ideation".

Promotion

History

Self-love was first promoted by the Beat Generation of the 1950s and in the early years of the hippie era of the 1960s. After witnessing the devastating consequences of World War II, and with troops still fighting in the Vietnam War, Western (especially North American) societies began promoting "peace and love" to generate positive energy and to encourage environmental preservation amid developments such as the emergence of oil pipelines and the recognition of pollution caused by the greenhouse effect.

These deteriorating living conditions caused worldwide protests that primarily focused on ending the war but that also promoted a positive environment, aided by the dynamics of crowd psychology. The post-war community, though vulnerable to persuasion, began encouraging freedom, harmony and the possibility of a brighter, non-violent future. These protests took place on almost every continent, in countries including the United States (primarily New York City and California), England and Australia. The generation's dedication, perseverance and empathy towards human life defined it as one of peace advocates and carefree souls.

The feminist movement emerged as early as the 19th century, with women's rights protests that eventually led to women gaining the right to vote, and it gained further influence during the second wave. These protests not only promoted equality but also suggested that women should recognize their self-worth through the knowledge and acceptance of self-love. In her feminist essay "Declaration of Sentiments", Elizabeth Cady Stanton used the Declaration of Independence as a guideline to demonstrate that women had been harshly treated throughout the centuries. In the essay she claims that "all men and women are created equal; ... that among these [rights] are life, liberty, and the pursuit of happiness", and that without these rights, the capacity to feel self-worth and self-love is scarce. The essay suggests that a lack of self-esteem and a fear of self-love still affect modern women due to lingering post-industrial gender conditions.

Self-love has also been used as a tool in communities of color in the United States. During the 1970s Black Power movement, the slogan "Black is beautiful!" became a way for African-Americans to throw off the mantle of predominately white beauty norms. The dominant cultural aesthetic before the 1970s was to straighten Black hair with a perm or hot comb. During the Black Power movement, the "afro" or "fro" became the popular hairstyle: it involved letting Black hair grow naturally, without chemical treatment, so as to embrace and flaunt the extremely curly hair texture of Black people. The hair was teased out using a pick, with the goal of forming a halo around the head that flaunted the Blackness of its wearer. This form of self-love and empowerment during the 70s was a way for African-Americans to combat the stigma against their natural hair texture, which was, and still is, largely seen as unprofessional in the modern workplace.

Modern platforms

The emergence of social media has created a platform for self-love promotion and mental health awareness in order to end the stigma surrounding mental health and to address self-love positively rather than negatively.


Literary references

Beck, Bhar, Brown & Ghahramanlou-Holloway (2008). "Self-Esteem and Suicide Ideation in Psychiatric Outpatients". Suicide and Life-Threatening Behavior 38.

Shakespeare, Twelfth Night (I.v.85–86): Malvolio is described as "sick of self-love ... a distempered appetite", lacking self-perspective.

Gregg, A. P. & Sedikides, C. (2003): the definition of self-love or self-worth quoted above.

Zayas, Willy (2019). Origins of Self-Love.

 

Bundle theory

From Wikipedia, the free encyclopedia

Bundle theory, originated by the 18th-century Scottish philosopher David Hume, is an ontological theory of objecthood in which an object consists only of a collection (bundle) of properties, relations or tropes.

According to bundle theory, an object consists of its properties and nothing more; thus, there cannot be an object without properties and one cannot conceive of such an object. For example, when we think of an apple, we think of its properties: redness, roundness, being a type of fruit, etc. There is nothing above and beyond these properties; the apple is nothing more than the collection of its properties. In particular, there is no substance in which the properties are inherent.
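
One way to see what is at stake, as a rough programming analogy rather than anything drawn from Hume: if an object is nothing but its properties, two bundles with identical properties are one and the same object, whereas a substance carries an identity independent of its properties:

```python
# Illustrative analogy only: bundle theory vs. substance theory as two
# notions of object identity. The encoding is invented for this sketch.

# Bundle theory: an object just *is* its properties, so identity is
# structural equality of the property collections.
apple_bundle_1 = frozenset({"red", "round", "fruit", "juicy"})
apple_bundle_2 = frozenset({"red", "round", "fruit", "juicy"})
print(apple_bundle_1 == apple_bundle_2)  # True: nothing beyond the properties

class Substance:
    """A bare particular: identity independent of any properties."""
    def __init__(self, properties):
        self.properties = set(properties)

# Substance theory: two substrata can bear exactly the same properties and
# still be distinct objects.
apple_s1 = Substance({"red", "round", "fruit", "juicy"})
apple_s2 = Substance({"red", "round", "fruit", "juicy"})
print(apple_s1 is apple_s2)  # False: distinct bearers, identical properties
```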

Arguments for

The difficulty in conceiving of or describing an object without also conceiving of or describing its properties is a common justification for bundle theory, especially among current philosophers in the Anglo-American tradition.

The inability to comprehend any aspect of the thing other than its properties implies, this argument maintains, that one cannot conceive of a bare particular (a substance without properties), an implication that directly opposes substance theory. The conceptual difficulty of bare particulars was illustrated by John Locke when he described a substance by itself, apart from its properties, as "something, I know not what. [...] The idea then we have, to which we give the general name substance, being nothing but the supposed, but unknown, support of those qualities we find existing, which we imagine cannot subsist sine re substante, without something to support them, we call that support substantia; which, according to the true import of the word, is, in plain English, standing under or upholding."

Whether a relation of an object is one of its properties may complicate such an argument. However, the argument concludes that the conceptual challenge of bare particulars leaves a bundle of properties and nothing more as the only possible conception of an object, thus justifying bundle theory.

Objections

Bundle theory maintains that properties are bundled together in a collection without describing how they are tied together. For example, bundle theory regards an apple as red, four inches (100 mm) wide, and juicy but lacking an underlying substance. The apple is said to be a bundle of properties including redness, being four inches (100 mm) wide, and juiciness. David Hume used the term "bundle" in this sense, also in reference to personal identity, in his main work: "I may venture to affirm of the rest of mankind, that they are nothing but a bundle or collection of different perceptions, which succeed each other with inconceivable rapidity, and are in a perpetual flux and movement".

Critics question how bundle theory accounts for the properties' compresence (the togetherness relation between those properties) without an underlying substance. Critics also question how any two given properties are determined to be properties of the same object if there is no substance in which they both inhere.

Traditional bundle theory explains the compresence of properties by defining an object as a collection of properties bound together. Thus, different combinations of properties and relations produce different objects. Redness and juiciness, for example, may be found together on top of the table because they are part of a bundle of properties located on the table, one of which is the "looks like an apple" property.

By contrast, substance theory explains the compresence of properties by asserting that the properties are found together because it is the substance that has those properties. In substance theory, a substance is the thing in which properties inhere. For example, redness and juiciness are found on top of the table because redness and juiciness inhere in an apple, making the apple red and juicy.

The bundle theory of substance explains compresence. Specifically, it maintains that properties' compresence itself engenders a substance. Thus, it determines substancehood empirically by the togetherness of properties rather than by a bare particular or by any other non-empirical underlying strata. The bundle theory of substance thus rejects the substance theories of Aristotle, Descartes, Leibniz, and more recently, J. P. Moreland, Jia Hou, Joseph Bridgman, Quentin Smith, and others.

Buddhism

The Buddhist Madhyamaka philosopher, Chandrakirti, used the aggregate nature of objects to demonstrate the lack of essence in what is known as the sevenfold reasoning. In his work, Guide to the Middle Way (Sanskrit: Madhyamakāvatāra), he says:

[The self] is like a cart, which is not other than its parts, not non-other, and does not possess them. It is not within its parts, and its parts are not within it. It is not the mere collection, and it is not the shape.

He goes on to explain what is meant by each of these seven assertions, but briefly in a subsequent commentary he explains that the conventions of the world do not exist essentially when closely analyzed, but exist only through being taken for granted, without being subject to scrutiny that searches for an essence within them.

Another view of the Buddhist theory of the self, especially in early Buddhism, is that the Buddhist theory is essentially an eliminativist theory. According to this understanding, the self cannot be reduced to a bundle because there is nothing that answers to the concept of a self. Consequently, the idea of a self must be eliminated.

Cooperative

From Wikipedia, the free encyclopedia ...