Tuesday, February 15, 2022

Enactivism

From Wikipedia, the free encyclopedia

Enactivism is a position in cognitive science that argues that cognition arises through a dynamic interaction between an acting organism and its environment. It claims that the environment of an organism is brought about, or enacted, by the active exercise of that organism's sensorimotor processes. "The key point, then, is that the species brings forth and specifies its own domain of problems ...this domain does not exist "out there" in an environment that acts as a landing pad for organisms that somehow drop or parachute into the world. Instead, living beings and their environments stand in relation to each other through mutual specification or codetermination" (p. 198).  "Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems...participate in the generation of meaning ...engaging in transformational and not merely informational interactions: they enact a world." These authors suggest that the increasing emphasis upon enactive terminology presages a new era in thinking about cognitive science. How the actions involved in enactivism relate to age-old questions about free will remains a topic of active debate.

The term 'enactivism' is close in meaning to 'enaction', defined as "the manner in which a subject of perception creatively matches its actions to the requirements of its situation". The introduction of the term enaction in this context is attributed to Francisco Varela, Evan Thompson, and Eleanor Rosch in The Embodied Mind (1991), who proposed the name to "emphasize the growing conviction that cognition is not the representation of a pre-given world by a pre-given mind but is rather the enactment of a world and a mind on the basis of a history of the variety of actions that a being in the world performs". This was further developed by Thompson and others, to place emphasis upon the idea that experience of the world is a result of mutual interaction between the sensorimotor capacities of the organism and its environment. However, some writers maintain that there remains a need for some degree of the mediating function of representation in this new approach to the science of the mind.

The initial emphasis of enactivism upon sensorimotor skills has been criticized as "cognitively marginal", but it has been extended to apply to higher level cognitive activities, such as social interactions. "In the enactive view,... knowledge is constructed: it is constructed by an agent through its sensorimotor interactions with its environment, co-constructed between and within living species through their meaningful interaction with each other. In its most abstract form, knowledge is co-constructed between human individuals in socio-linguistic interactions...Science is a particular form of social knowledge construction...[that] allows us to perceive and predict events beyond our immediate cognitive grasp...and also to construct further, even more powerful scientific knowledge."

Enactivism is closely related to situated cognition and embodied cognition, and is presented as an alternative to cognitivism, computationalism, and Cartesian dualism.

Philosophical aspects

Enactivism is one of a cluster of related theories sometimes known as the 4Es. As described by Mark Rowlands, mental processes are:

  • Embodied: involving more than the brain, including a more general involvement of bodily structures and processes.
  • Embedded: functioning only in a related external environment.
  • Enacted: involving not only neural processes, but also things an organism does.
  • Extended: reaching into the organism's environment.

Enactivism proposes an alternative to dualism as a philosophy of mind, in that it emphasises the interactions between mind, body and the environment, seeing them all as inseparably intertwined in mental processes. The self arises as part of the process of an embodied entity interacting with the environment in precise ways determined by its physiology. In this sense, individuals can be seen to "grow into" or arise from their interactive role with the world.

"Enaction is the idea that organisms create their own experience through their actions. Organisms are not passive receivers of input from the environment, but are actors in the environment such that what they experience is shaped by how they act."

In The Tree of Knowledge, Maturana & Varela proposed the term enactive "to evoke the view of knowledge that what is known is brought forth, in contraposition to the more classical views of either cognitivism or connectionism". They see enactivism as providing a middle ground between the two extremes of representationalism and solipsism. They seek to "confront the problem of understanding how our existence - the praxis of our living - is coupled to a surrounding world which appears filled with regularities that are at every instant the result of our biological and social histories.... to find a via media: to understand the regularity of the world we are experiencing at every moment, but without any point of reference independent of ourselves that would give certainty to our descriptions and cognitive assertions. Indeed the whole mechanism of generating ourselves, as describers and observers tells us that our world, as the world which we bring forth in our coexistence with others, will always have precisely that mixture of regularity and mutability, that combination of solidity and shifting sand, so typical of human experience when we look at it up close." [Tree of Knowledge, p. 241] Another important notion related to enactivism is autopoiesis, which refers to a system that is able to reproduce and maintain itself. Maturana & Varela write that "This was a word without a history, a word that could directly mean what takes place in the dynamics of the autonomy proper to living systems". Using the term autopoiesis, they argue that any closed system that has autonomy, self-reference and self-construction (that is, autopoietic activities) has cognitive capacities; cognition is therefore present in all living systems. This view is also called autopoietic enactivism.

Radical enactivism is another enactivist view of cognition. Radical enactivists often adopt a thoroughly non-representational, enactive account of basic cognition. Basic cognitive capacities mentioned by Hutto and Myin include perceiving, imagining and remembering. They argue that these forms of basic cognition can be explained without positing mental representations. With regard to complex forms of cognition such as language, however, they hold that mental representations are needed, because explanations of content are required there. Regarding human public practices, they claim that "such intersubjective practices and sensitivity to the relevant norms comes with the mastery of the use of public symbol systems" (2017, p. 120), and that "as it happens, this appears only to have occurred in full form with construction of sociocultural cognitive niches in the human lineage" (2017, p. 134). They conclude that basic cognition, as well as cognition in simple organisms such as bacteria, is best characterized as non-representational.

Enactivism also addresses the hard problem of consciousness, referred to by Thompson as part of the explanatory gap in explaining how consciousness and subjective experience are related to brain and body. "The problem with the dualistic concepts of consciousness and life in standard formulations of the hard problem is that they exclude each other by construction". Instead, according to Thompson's view of enactivism, the study of consciousness or phenomenology as exemplified by Husserl and Merleau-Ponty is to complement science and its objectification of the world. "The whole universe of science is built upon the world as directly experienced, and if we want to subject science itself to rigorous scrutiny and arrive at a precise assessment of its meaning and scope, we must begin by reawakening the basic experience of the world of which science is the second-order expression" (Merleau-Ponty, The phenomenology of perception as quoted by Thompson, p. 165). In this interpretation, enactivism asserts that science is formed or enacted as part of humankind's interactivity with its world, and by embracing phenomenology "science itself is properly situated in relation to the rest of human life and is thereby secured on a sounder footing."

Enaction has been seen as a move to conjoin representationalism with phenomenalism, that is, as adopting a constructivist epistemology, an epistemology centered upon the active participation of the subject in constructing reality. However, 'constructivism' focuses upon more than a simple 'interactivity' that could be described as a minor adjustment to 'assimilate' reality or 'accommodate' to it. Constructivism looks upon interactivity as a radical, creative, revisionist process in which the knower constructs a personal 'knowledge system' based upon their experience and tested by its viability in practical encounters with their environment. Learning is a result of perceived anomalies that produce dissatisfaction with existing conceptions.

Shaun Gallagher also points out that pragmatism is a forerunner of enactive and extended approaches to cognition. According to him, enactive conceptions of cognition can be found in many pragmatists, such as Charles Sanders Peirce and John Dewey. For example, Dewey says that "The brain is essentially an organ for effecting the reciprocal adjustment to each other of the stimuli received from the environment and responses directed upon it" (1916, pp. 336–337). This view is fully consistent with the enactivist argument that cognition is not just a matter of brain processes; the brain is one part of a body engaged in dynamical regulation. Robert Brandom, a neo-pragmatist, comments that "A founding idea of pragmatism is that the most fundamental kind of intentionality (in the sense of directedness towards objects) is the practical involvement with objects exhibited by a sentient creature dealing skillfully with its world" (2008, p. 178).

How does constructivism relate to enactivism? From the above remarks it can be seen that Glasersfeld expresses an interactivity between the knower and the known quite acceptable to an enactivist, but does not emphasize the structured probing of the environment by the knower that leads to the "perturbation relative to some expected result" that then leads to a new understanding. It is this probing activity, especially where it is not accidental but deliberate, that characterizes enaction, and invokes affect, that is, the motivation and planning that lead to doing and to fashioning the probing, both observing and modifying the environment, so that "perceptions and nature condition one another through generating one another." The questioning nature of this probing activity is not an emphasis of Piaget and Glasersfeld.

Sharing enactivism's stress upon both action and embodiment in the incorporation of knowledge, but giving Glasersfeld's mechanism of viability an evolutionary emphasis, is evolutionary epistemology. Inasmuch as an organism must reflect its environment well enough to be able to survive in it, and to be competitive enough to reproduce at a sustainable rate, the structure and reflexes of the organism itself embody knowledge of its environment. This biology-inspired theory of the growth of knowledge is closely tied to universal Darwinism, and is associated with evolutionary epistemologists such as Karl Popper, Donald T. Campbell, Peter Munz, and Gary Cziko. According to Munz, "an organism is an embodied theory about its environment... Embodied theories are also no longer expressed in language, but in anatomical structures or reflex responses, etc."

One objection to enactive approaches to cognition is the so-called "scale-up objection". According to this objection, enactive theories have only limited value because they cannot "scale up" to explain more complex cognitive capacities, such as human thought, which seem extremely difficult to explain without positing representations. Recently, some philosophers have tried to respond to this objection. For example, Adrian Downey (2020) provides a non-representational account of obsessive-compulsive disorder, and then argues that ecological-enactive approaches can answer the "scaling up" objection.

Psychological aspects

McGann & others argue that enactivism attempts to mediate between the explanatory role of the coupling between cognitive agent and environment and the traditional emphasis on brain mechanisms found in neuroscience and psychology. In the interactive approach to social cognition developed by De Jaegher & others, the dynamics of interactive processes are seen to play significant roles in coordinating interpersonal understanding, processes that in part include what they call participatory sense-making. Recent developments of enactivism in the area of social neuroscience involve the proposal of The Interactive Brain Hypothesis where social cognition brain mechanisms, even those used in non-interactive situations, are proposed to have interactive origins.

Enactive views of perception

In the enactive view, perception "is not conceived as the transmission of information but more as an exploration of the world by various means. Cognition is not tied into the workings of an 'inner mind', some cognitive core, but occurs in directed interaction between the body and the world it inhabits."

Alva Noë in advocating an enactive view of perception sought to resolve how we perceive three-dimensional objects, on the basis of two-dimensional input. He argues that we perceive this solidity (or 'volumetricity') by appealing to patterns of sensorimotor expectations. These arise from our agent-active 'movements and interaction' with objects, or 'object-active' changes in the object itself. The solidity is perceived through our expectations and skills in knowing how the object's appearance would change with changes in how we relate to it. He saw all perception as an active exploration of the world, rather than being a passive process, something which happens to us.

Noë's idea of the role of 'expectations' in three-dimensional perception has been opposed by several philosophers, notably by Andy Clark. Clark points to difficulties of the enactive approach. He points to internal processing of visual signals, for example, in the ventral and dorsal pathways, the two-streams hypothesis. This results in an integrated perception of objects (their recognition and location, respectively) yet this processing cannot be described as an action or actions. In a more general criticism, Clark suggests that perception is not a matter of expectations about sensorimotor mechanisms guiding perception. Rather, although the limitations of sensorimotor mechanisms constrain perception, this sensorimotor activity is drastically filtered to fit current needs and purposes of the organism, and it is these imposed 'expectations' that govern perception, filtering for the 'relevant' details of sensorimotor input (called "sensorimotor summarizing").

These sensorimotor-centered and purpose-centered views appear to agree on the general scheme but disagree on the question of dominance: is the dominant component peripheral or central? Another view, closed-loop perception, assigns equal a priori dominance to the peripheral and central components. In closed-loop perception, perception emerges through the inclusion of an item in a motor-sensory-motor loop, i.e., a loop (or loops) connecting the peripheral and central components that are relevant to that item. The item can be a body part (in which case the loops are in steady state) or an external object (in which case the loops are perturbed and gradually converge to a steady state). These enactive loops are always active, switching dominance as needed.
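The convergence just described can be illustrated with a toy simulation. This is a hypothetical sketch, not a model drawn from the closed-loop perception literature: the linear loop dynamics, the `gain` parameter and the function names are all illustrative assumptions.

```python
# Hypothetical sketch: a motor-sensory-motor loop perturbed by an
# external object gradually converges to a steady state.

def closed_loop(object_position, steps=100, gain=0.3):
    """Iterate a minimal motor-sensory-motor loop until it settles."""
    estimate = 0.0  # the loop's internal state before the perturbation
    history = []
    for _ in range(steps):
        motor_command = gain * (object_position - estimate)  # act toward the object
        sensory_feedback = estimate + motor_command          # environment returns the new contact point
        estimate = sensory_feedback                          # sensing closes the loop and updates the state
        history.append(estimate)
    return history

trace = closed_loop(object_position=1.0)
# Each cycle moves the loop a fixed fraction of the remaining gap,
# so the perturbed loop converges on the object's position.
```

With a body part instead of an external object, the loop would simply start at its fixed point and stay there, matching the steady-state case described above.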

Another application of enaction to perception is analysis of the human hand. The many remarkably demanding uses of the hand are not learned by instruction, but through a history of engagements that lead to the acquisition of skills. According to one interpretation, it is suggested that "the hand [is]...an organ of cognition", not a faithful subordinate working under top-down instruction, but a partner in a "bi-directional interplay between manual and brain activity." According to Daniel Hutto: "Enactivists are concerned to defend the view that our most elementary ways of engaging with the world and others - including our basic forms of perception and perceptual experience - are mindful in the sense of being phenomenally charged and intentionally directed, despite being non-representational and content-free." Hutto calls this position 'REC' (Radical Enactive Cognition): "According to REC, there is no way to distinguish neural activity that is imagined to be genuinely content involving (and thus truly mental, truly cognitive) from other non-neural activity that merely plays a supporting or enabling role in making mind and cognition possible."

Participatory sense-making

Hanne De Jaegher and Ezequiel Di Paolo (2007) have extended the enactive concept of sense-making into the social domain. The idea takes as its departure point the process of interaction between individuals in a social encounter. De Jaegher and Di Paolo argue that the interaction process itself can take on a form of autonomy (operationally defined). This allows them to define social cognition as the generation of meaning and its transformation through interacting individuals.

The notion of participatory sense-making has led to the proposal that interaction processes can sometimes play constitutive roles in social cognition (De Jaegher, Di Paolo, Gallagher, 2010). It has been applied to research in social neuroscience and autism.

In a similar vein, "an inter-enactive approach to agency holds that the behavior of agents in a social situation unfolds not only according to their individual abilities and goals, but also according to the conditions and constraints imposed by the autonomous dynamics of the interaction process itself". According to Torrance, enactivism involves five interlocking themes related to the question "What is it to be a (cognizing, conscious) agent?" It is:

1. to be a biologically autonomous (autopoietic) organism
2. to generate significance or meaning, rather than to act via...updated internal representations of the external world
3. to engage in sense-making via dynamic coupling with the environment
4. to 'enact' or 'bring forth' a world of significances by mutual co-determination of the organism with its enacted world
5. to arrive at an experiential awareness via lived embodiment in the world.

Torrance adds that "many kinds of agency, in particular the agency of human beings, cannot be understood separately from understanding the nature of the interaction that occurs between agents." That view introduces the social applications of enactivism. "Social cognition is regarded as the result of a special form of action, namely social interaction...the enactive approach looks at the circular dynamic within a dyad of embodied agents."

In cultural psychology, enactivism is seen as a way to uncover cultural influences upon feeling, thinking and acting. Baerveldt and Verheggen argue that "It appears that seemingly natural experience is thoroughly intertwined with sociocultural realities." They suggest that the social patterning of experience is to be understood through enactivism, "the idea that the reality we have in common, and in which we find ourselves, is neither a world that exists independently from us, nor a socially shared way of representing such a pregiven world, but a world itself brought forth by our ways of communicating and our joint action....The world we inhabit is manufactured of 'meaning' rather than 'information'."

Luhmann attempted to apply Maturana and Varela's notion of autopoiesis to social systems. "A core concept of social systems theory is derived from biological systems theory: the concept of autopoiesis. Chilean biologist Humberto Maturana came up with the concept to explain how biological systems such as cells are a product of their own production." "Systems exist by way of operational closure and this means that they each construct themselves and their own realities."

Educational aspects

The term enaction was first introduced by psychologist Jerome Bruner, who described it as 'learning by doing' in his discussion of how children learn, and how they can best be helped to learn. He associated enaction with two other modes of knowledge organization: iconic and symbolic.

"Any domain of knowledge (or any problem within that domain of knowledge) can be represented in three ways: by a set of actions appropriate for achieving a certain result (enactive representation); by a set of summary images or graphics that stand for a concept without defining it fully (iconic representation); and by a set of symbolic or logical propositions drawn from a symbolic system that is governed by rules or laws for forming and transforming propositions (symbolic representation)"

The term 'enactive framework' was elaborated upon by Francisco Varela and Humberto Maturana.

Sriraman argues that enactivism provides "a rich and powerful explanatory theory for learning and being", and that it is closely related both to Piaget's ideas of cognitive development and to Vygotsky's social constructivism. Piaget focused on the child's immediate environment, and suggested that cognitive structures like spatial perception emerge as a result of the child's interaction with the world. According to Piaget, children construct knowledge by using what they know in new ways and testing it, and the environment provides feedback concerning the adequacy of their constructions. In a cultural context, Vygotsky suggested that the kind of cognition that can take place is not dictated by the engagement of the isolated child alone, but is also a function of social interaction and dialogue that is contingent upon a sociohistorical context. Enactivism in educational theory "looks at each learning situation as a complex system consisting of teacher, learner, and context, all of which frame and co-create the learning situation." Enactivism in education is very closely related to situated cognition, which holds that "knowledge is situated, being in part a product of the activity, context, and culture in which it is developed and used." This approach challenges the "separating of what is learned from how it is learned and used."

Artificial intelligence aspects

The ideas of enactivism regarding how organisms engage with their environment have interested those involved in robotics and man-machine interfaces. The analogy is drawn that a robot can be designed to interact with and learn from its environment much as an organism does, and that a human can interact with a computer-aided design tool or database through an interface that creates an enactive environment for the user: all the user's tactile, auditory, and visual capabilities are enlisted in a mutually explorative engagement, rather than being limited to cerebral engagement. In these areas it is common to refer to affordances as a design concept, the idea that an environment or an interface affords opportunities for enaction; good design involves optimizing the role of such affordances.

The activity in the AI community has influenced enactivism as a whole. Referring extensively to modeling techniques for evolutionary robotics by Beer, the modeling of learning behavior by Kelso, and the modeling of sensorimotor activity by Saltzman, McGann, De Jaegher, and Di Paolo discuss how this work makes the dynamics of coupling between an agent and its environment, the foundation of enactivism, "an operational, empirically observable phenomenon." That is, AI research provides concrete instances of enactivism that, although not as complex as living organisms, isolate and illuminate its basic principles.
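The "operational, empirically observable" coupling mentioned above can be made concrete with a minimal dynamical-systems sketch in the spirit of such models. Everything here, the linear dynamics, coefficients, and variable names, is an illustrative assumption rather than a reproduction of Beer's, Kelso's, or Saltzman's actual models; the point is only that the trajectory of the coupled pair is not derivable from either component alone.

```python
# Minimal agent-environment coupling: two mutually dependent
# differential equations, integrated with Euler steps.
# The dynamics and coefficients are illustrative assumptions.

def step(agent, env, dt=0.01):
    agent_dot = -agent + env            # the agent's state is driven by what it senses
    env_dot = -0.5 * env + 0.5 * agent  # the environment is perturbed by what the agent does
    return agent + dt * agent_dot, env + dt * env_dot

agent, env = 1.0, 0.0  # start the pair out of equilibrium
for _ in range(5000):
    agent, env = step(agent, env)
# The coupled system relaxes toward a joint steady state that neither
# equation determines on its own.
```

Removing either coupling term changes where the pair settles, which is one way such models make mutual specification empirically observable rather than merely metaphorical.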

Animal ethics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Animal_ethics

Animal ethics is a branch of ethics which examines human-animal relationships, the moral consideration of animals and how nonhuman animals ought to be treated. The subject matter includes animal rights, animal welfare, animal law, speciesism, animal cognition, wildlife conservation, wild animal suffering, the moral status of nonhuman animals, the concept of nonhuman personhood, human exceptionalism, the history of animal use, and theories of justice. Several different theoretical approaches have been proposed to examine this field, in accordance with the different theories currently defended in moral and political philosophy. No single theory is completely accepted, owing to differing understandings of what is meant by the term ethics; however, some theories, such as animal rights and utilitarianism, are more widely accepted by society.

History

The history of the regulation of animal research was a fundamental step in the development of animal ethics, as this was when the term "animal ethics" first emerged. Initially, the term was associated solely with cruelty, only changing in the late 20th century, when this framing was deemed inadequate in modern society. The United States Animal Welfare Act of 1966 attempted to tackle the problems of animal research, but its effects were widely considered inadequate. Many did not support the act, as it implied that animal suffering was justifiable whenever the tests produced human benefit. It was not until the establishment of the animal rights movement that people started supporting and voicing their opinions in public. This movement gave animal ethics a public voice and led to major changes in its scope and influence.

Animal rights

The first animal rights laws were introduced between 1635 and 1780. In 1635, Ireland became the first country to pass animal protection legislation, "An Act against Plowing by the Tayle, and pulling the Wooll off living Sheep". In 1641, the Massachusetts colony passed the Body of Liberties, which included regulations against any "Tirranny or Crueltie" towards animals. In 1687, Japan reintroduced a ban on eating meat and killing animals. In 1789, philosopher Jeremy Bentham argued in An Introduction to the Principles of Morals and Legislation that an animal's capacity to suffer—not its intelligence—meant that it should be granted rights: "The question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?"

Between 1822 and 1892, more laws were passed to protect animals. In 1822, the British Parliament passed the Cruel Treatment of Cattle Act. In 1824, the first animal rights society was founded in England by Richard Martin, Arthur Broome, Lewis Gompertz and William Wilberforce: the Society for the Prevention of Cruelty to Animals, which later became the RSPCA. The same year, Gompertz published Moral Inquiries on the Situation of Man and of Brutes, one of the first books advocating what would, more than a century later, become known as veganism. In 1835, Britain passed the first Cruelty to Animals Act. In 1866, the American Society for the Prevention of Cruelty to Animals was founded by New Yorker Henry Bergh. In 1875, Frances Power Cobbe established the National Anti-Vivisection Society in Britain. In 1892, English social reformer Henry Stephens Salt published Animal Rights: Considered in Relation to Social Progress.

In 1970, Richard D. Ryder coined speciesism, a term for discrimination against animals based on their species membership. The term was popularized by the philosopher and ethicist Peter Singer in his 1975 book Animal Liberation. The late 1970s marked the beginning of the animal rights movement, which advanced the belief that animals must be recognised as sentient beings and protected from unnecessary harm. Since the 18th century, many groups have organised in support of different aspects of animal rights, pursuing their aims in differing ways. The Animal Liberation Front, an English group, took the law into its own hands, orchestrating the Penn break-in, while a group such as People for the Ethical Treatment of Animals, founded in the US, supports the same goals but aims for legislative gains.

Animal testing

Animal testing for biomedical research dates to the writings of the ancient Greeks. It is understood that physician-scientists such as Aristotle and Erasistratus carried out experiments on living animals. After them, Galen, a Greek who resided in Rome, carried out experiments on living animals to advance knowledge of anatomy, physiology, pathology, and pharmacology. Animal testing has since evolved considerably and continues in the modern day, with millions of experimental animals used around the world. In recent years, however, it has come under severe criticism from the public and animal activist groups. Opponents argue that the benefits animal testing provides for humanity do not justify the suffering of those animals; proponents argue that animal testing is fundamental to the advancement of biomedical knowledge.

Drug testing on animals expanded dramatically in the 20th century. In 1937, a US pharmaceutical company created an infamous drug called "Elixir Sulfanilamide". The drug contained diethylene glycol (DEG), a chemical toxic to humans, though its toxicity was not known at the time. Released to the public without precautions, the drug was responsible for a mass poisoning: the DEG killed over a hundred people, causing public uproar. Thus, in 1938 the US Congress passed the Federal Food, Drug, and Cosmetic Act, enforced by the Food and Drug Administration (FDA). This required the testing of drugs on animals before a product was marketed, to confirm that it would have no harmful effects on humans.

However, since these regulations were put in place, the number of animal deaths from testing has increased. More than one million animals are killed in testing every year in the US, and many of them die in distressing ways: inhaling toxic gas, having skin burned off, or having holes drilled into their skulls.

The 3 Rs

Laboratory rat with a brain implant being fed

The 3 Rs were first introduced in a 1959 book called "The Principles of Humane Experimental Technique" by zoologist W. M. S. Russell and microbiologist R. L. Burch. The 3 Rs stand for Replacement, Reduction, and Refinement and are the guiding principles for the ethical treatment of animals used for testing and experimentation:

  1. Replacement: Avoiding using an animal for testing by switching out the animal for something non-living, such as a computer model, or an animal which is less susceptible to pain in relation to the experiment.
  2. Reduction: Devising a plan to use the fewest animals possible; a combination of using fewer animals to gain sufficient data, and maximising the amount of data from each animal to use fewer animals.
  3. Refinement: A decrease in any unnecessary pain inflicted on the animal; adapting experimental procedures to minimise suffering.

The Three Rs principles are now widely accepted by many countries and are applied in practices that involve animal experimentation.

Ethical guidelines for animal research

There is a wide range of ethical assessments regarding animals used in research. It is generally held that animals have moral status and that how they are treated should be subject to ethical consideration; some of these positions include:

  • Animals have intrinsic values that must be respected.
  • Animals can feel pain and their interests must be taken into consideration.
  • Our treatment of animals, including laboratory animals, reflects our attitudes and influences us as moral beings.

The Norwegian National Committee for Research Ethics in Science and Technology (NENT) has a set of ethical guidelines for the use of animals in research:

  1. Respect for animal dignity: Researchers must respect animals' worth, regardless of their utility value, and respect their interests as living, sentient creatures. Researchers must show this respect when choosing their topics and methods and when disseminating their research, and must provide care adapted to the needs of each laboratory animal.
  2. Responsibility for considering options (Replace): When alternatives are available, researchers are responsible for studying whether they can replace animal experimentation. When no good alternatives exist, researchers must consider whether the research can be postponed until one is developed. To justify experiments on animals, researchers must be able to account for the absence of alternatives and for the urgency of obtaining the knowledge.
  3. The principle of proportionality: responsibility for considering and balancing suffering and benefit: Researchers must consider the risks of pain and suffering that laboratory animals will face and weigh them against the value of the research for animals, people, and the environment. Researchers are responsible for assessing whether the research will lead to improvements for animals, people, or the environment. The possible benefits of the study must be considered, substantiated, and specified in both the short and long term. This responsibility also entails an obligation to consider the scientific quality of the experiment and whether it will have relevant scientific benefits. Suffering may only be inflicted on animals if it is counterbalanced by substantial and probable benefits for animals, people, or the environment. Since there are many methods of analysing harm and benefit, research institutions must provide training in suitable models, and researchers are responsible for using such methods of analysis when planning any experiment on animals (see guideline 5).
  4. Responsibility for considering reducing the number of animals (Reduce): Researchers are responsible for considering whether the number of animals an experiment plans to use can be reduced, and for including only the number necessary to maintain the scientific quality of the experiment and the relevance of the results. Before the experiment, researchers must conduct literature studies, consider alternative designs, and perform the calculations needed to determine the number of animals required.
  5. Responsibility for minimizing the risk of suffering and improving animal welfare (Refine): Researchers are responsible for assessing the expected effects on laboratory animals, for minimizing the risk of suffering, and for providing good animal welfare. Suffering includes pain, hunger, malnutrition, thirst, abnormal cold or heat, fear, stress, illness, injury, and restrictions that prevent the animal from behaving naturally and normally. Assessments of what constitutes significant suffering should be based on the animal that suffers the most, and consideration for the animal is the deciding factor if there is doubt regarding the suffering it will face. Researchers must consider not only the direct suffering the animal may endure during the experiment itself, but also the risk of suffering before and after it, including during breeding, transportation, trapping, euthanizing, labeling, anesthetizing, and stabling. This means that all researchers must take into account the animal's need for periods of adaptation before and after an experiment.
  6. Responsibility for maintaining biological diversity: Researchers are also responsible for ensuring that the use of laboratory animals does not disrupt or endanger biological diversity. This means that researchers must consider the consequences for the stock and for the ecosystem as a whole. The use of endangered species must be kept to a minimum. When there is credible but uncertain knowledge that the inclusion of animals in research, or the use of certain methods, may have ethically unacceptable consequences for the stock or the ecosystem as a whole, researchers must observe the precautionary principle.
  7. Responsibility when intervening in a habitat: Researchers have a responsibility to reduce disruption of, and any impact on, the natural behaviour of animals, including those that are not direct subjects of the research, as well as on populations and their surroundings. Many research and technology-related projects, such as those concerning environmental technology and surveillance, may affect animals and their living conditions. In such cases, researchers must seek to observe the principle of proportionality and to reduce possible negative impacts (see guideline 3).
  8. Responsibility for openness and sharing of data and material: Researchers are responsible for ensuring the transparency of research findings and for facilitating the sharing of data and materials from all animal experiments. Transparency and sharing are important so that the same experiments are not needlessly repeated on animals, and so that the data are available to the public as part of researchers' responsibility for dissemination. Negative results of experiments on animals should also be public knowledge: releasing them tells other researchers which experiments are not worth pursuing, sheds light on unfortunate research designs, and can help reduce the number of animals used in research.
  9. Requirement of expertise on animals: Researchers and other parties who work with and handle live animals are required to have adequate, up-to-date, and documented expertise on the animals. This includes knowledge of the biology of the animal species in question and the willingness and ability to take care of the animals properly.
  10. Requirement of due care: There are many laws, rules, international conventions, and agreements regarding laboratory animals with which both researchers and research managers must comply. Anyone who wants to use animals in experiments should familiarize themselves with the current rules.

Ethical theories

Ethical thinking has influenced the way society perceives animal ethics in at least three ways: first, through the original rise of animal ethics and of ideas about how animals should be treated; second, through the evolution of animal ethics as people came to realise that the ideology was not as simple as first proposed; and third, through the challenges humans face in applying these ethics, such as maintaining moral consistency and justifying particular cases.

Consequentialism

Consequentialism is a family of ethical theories that judge the rightness or wrongness of an action by its consequences: if an action brings more benefit than harm, it is good; if it brings more harm than benefit, it is bad. The best-known type of consequentialist theory is utilitarianism.

The publication of Peter Singer's book Animal Liberation in 1975 gathered sizeable traction and provided him with a platform to speak on the issues of animal rights. Because of the attention Singer received, his views were the most accessible and therefore the best known by the public. He supported the theory of utilitarianism, which remains a controversial but highly regarded foundation for animal research. Utilitarianism states that "an action is right if and only if it produces a better balance of benefits and harms than available alternative actions"; it thus determines whether something is right by weighing the pleasure against the suffering that results. It is not concerned with the process, only with the weight of the consequences, and while consequentialism in general judges whether an action is bad or good, utilitarianism focuses only on the benefit of the outcome. While this can be applied to some animal research and to raising animals for food, several objections have been raised against utilitarianism. Singer grounded his position in sentience, taking the capacity to suffer, rather than abilities such as self-consciousness, autonomy, or the capacity to act morally, as the morally relevant dividing line between humans and animals. This came to be called "the argument from marginal cases". Critics allege, however, that not all morally relevant beings fall under this category, for instance some people in a persistent vegetative state who have no awareness of themselves or their surroundings. On Singer's arguments, it would be as (or more) justified to carry out medical experiments on these non-sentient humans as on other (sentient) animals. Another limitation of applying utilitarianism to animal ethics is that it is difficult to accurately measure and compare the suffering of the harmed animals against the gains of the beneficiaries, for instance in medical experiments.

Jeff Sebo argues that utilitarianism has three main implications for animal ethics: "First, utilitarianism plausibly implies that all vertebrates and at least some invertebrates morally matter, and that large animals like elephants matter more on average and that small animals like ants might matter more in total. Second, utilitarianism plausibly implies that we morally ought to attempt to both promote animal welfare and respect animal rights in many real-life cases. Third, utilitarianism plausibly implies that we should prioritize farmed and wild animal welfare and pursue a variety of interventions at once to make progress on these issues".

Deontology

Deontology is a theory that evaluates moral actions based solely on doing one's duty, not on the consequences of those actions. If it is your duty to carry out a task, doing so is morally right regardless of the consequences, and failing to do it is morally wrong. There are many types of deontological theory, but the one most commonly recognised is associated with Immanuel Kant. The theory can be invoked from conflicting sides: a researcher may think it their duty to make an animal suffer in order to find a cure for a disease affecting millions of humans, which according to deontology is morally correct, while an animal activist might think that saving the animals being tested on is their duty, creating a contradiction. A further difficulty arises when one must choose between two opposing moral duties, such as deciding whether to lie about where an escaped chicken went or to tell the truth and send the chicken to its death: lying is immoral, but so is sending the chicken to its death.

A highlighted flaw in Kant's theory is that it applies only to humans and not to non-human animals. The theory opposes utilitarianism in that it concerns itself with duty rather than consequences; both, however, are fundamental theories that contribute to animal ethics.

Virtue ethics

Virtue ethics focuses neither on the consequences of an action nor on the duty behind it, but on whether the action is what a virtuous person would do. It asks whether an action would stem from a virtuous person or from someone of vicious character: if from a virtuous person, the action is said to be morally right; if from a vicious one, immoral. A virtuous person is said to hold qualities such as respect, tolerance, justice, and equality. One advantage this theory has over the others is that it takes into account human emotions affecting moral decisions, which the previous two do not. A flaw, however, is that judgements of what counts as a virtuous person are highly subjective and can drastically affect a person's moral compass; with this underlying issue, the theory cannot be applied to all cases.

Relationship with environmental ethics

Differing conceptions of the treatment of and duties towards animals, particularly those living in the wild, within animal ethics and environmental ethics have been a source of conflict between the two ethical positions; some philosophers have made a case that the two positions are incompatible, while others have argued that such disagreements can be overcome.

Electrolysis of water

From Wikipedia, the free encyclopedia
 
Simple setup for demonstration of electrolysis of water at home
 
An AA battery in a glass of tap water with salt showing hydrogen produced at the negative terminal

Electrolysis of water is the process of using electricity to decompose water into oxygen and hydrogen gas by a process called electrolysis. Hydrogen gas released in this way can be used as hydrogen fuel, or remixed with the oxygen to create oxyhydrogen gas, which is used in welding and other applications.

Sometimes called water splitting, electrolysis requires a minimum potential difference of 1.23 volts, though at that voltage external heat is required from the environment.

History

Device invented by Johann Wilhelm Ritter to develop the electrolysis of water

In 1789, Jan Rudolph Deiman and Adriaan Paets van Troostwijk used an electrostatic machine to generate electricity, which was discharged on gold electrodes in a Leyden jar with water. In 1800 Alessandro Volta invented the voltaic pile, and a few weeks later the English scientists William Nicholson and Anthony Carlisle used it for the electrolysis of water. In 1806 Humphry Davy reported the results of extensive distilled-water electrolysis experiments, concluding that nitric acid was produced at the anode from dissolved atmospheric nitrogen gas. He used a high-voltage battery and non-reactive electrodes and vessels, such as gold electrode cones that doubled as vessels bridged by damp asbestos. When Zénobe Gramme invented the Gramme machine in 1869, electrolysis of water became a cheap method for the production of hydrogen. A method of industrial synthesis of hydrogen and oxygen through electrolysis was developed by Dmitry Lachinov in 1888.

Principle

A DC electrical power source is connected to two electrodes, or two plates (typically made from an inert metal such as platinum or iridium) which are placed in the water. Hydrogen will appear at the cathode (where electrons enter the water), and oxygen will appear at the anode. Assuming ideal faradaic efficiency, the amount of hydrogen generated is twice the amount of oxygen, and both are proportional to the total electrical charge conducted by the solution. However, in many cells competing side reactions occur, resulting in different products and less than ideal faradaic efficiency.

Electrolysis of pure water requires excess energy in the form of overpotential to overcome various activation barriers. Without the excess energy, the electrolysis of pure water occurs very slowly or not at all. This is in part due to the limited self-ionization of water. Pure water has an electrical conductivity about one-millionth that of seawater. Many electrolytic cells may also lack the requisite electrocatalysts. The efficiency of electrolysis is increased through the addition of an electrolyte (such as a salt, an acid or a base) and the use of electrocatalysts.

Currently, the electrolytic process is rarely used in industrial applications since hydrogen can be produced more affordably from fossil fuels.

Equations

Diagram showing the overall chemical equation.

In pure water at the negatively charged cathode, a reduction reaction takes place, with electrons (e−) from the cathode being given to hydrogen cations to form hydrogen gas. The half-reaction, balanced with acid, is:

Reduction at cathode: 2 H+(aq) + 2 e− → H2(g)

At the positively charged anode, an oxidation reaction occurs, generating oxygen gas and giving electrons to the anode to complete the circuit:

Oxidation at anode: 2 H2O(l) → O2(g) + 4 H+(aq) + 4 e−

The same half-reactions can also be balanced with base, as listed below. Not all half-reactions must be balanced with acid or base, but many, like the oxidation and reduction of water here, can be. To add half-reactions, both must be balanced with either acid or base. The acid-balanced reactions predominate in acidic (low pH) solutions, while the base-balanced reactions predominate in basic (high pH) solutions.

Cathode (reduction): 2 H2O(l) + 2 e− → H2(g) + 2 OH−(aq)
Anode (oxidation): 2 OH−(aq) → 1/2 O2(g) + H2O(l) + 2 e−

Combining either half reaction pair yields the same overall decomposition of water into oxygen and hydrogen:

Overall reaction: 2 H2O(l) → 2 H2(g) + O2(g)

The number of hydrogen molecules produced is thus twice the number of oxygen molecules. Assuming equal temperature and pressure for both gases, the produced hydrogen gas has, therefore, twice the volume of the produced oxygen gas. The number of electrons pushed through the water is twice the number of generated hydrogen molecules and four times the number of generated oxygen molecules.
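The charge-to-gas relationship above can be checked numerically with Faraday's law. A minimal Python sketch, assuming ideal faradaic efficiency (no side reactions); the current and duration chosen are purely illustrative:

```python
# Moles of gas produced by water electrolysis, via Faraday's law.
F = 96485.332  # Faraday constant, C per mol of electrons

def electrolysis_products(current_a, time_s):
    """Return (mol H2, mol O2) for a given current and duration,
    assuming ideal faradaic efficiency."""
    charge = current_a * time_s      # total charge passed, in coulombs
    mol_electrons = charge / F
    mol_h2 = mol_electrons / 2       # 2 e- per H2 molecule
    mol_o2 = mol_electrons / 4       # 4 e- per O2 molecule
    return mol_h2, mol_o2

h2, o2 = electrolysis_products(current_a=2.0, time_s=3600)
print(f"H2: {h2:.4f} mol, O2: {o2:.4f} mol")  # H2 is exactly twice O2
```

At equal temperature and pressure, the 2:1 mole ratio translates directly into the 2:1 volume ratio described above.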

Thermodynamics

Pourbaix diagram for water, including equilibrium regions for water, oxygen and hydrogen at STP. The vertical scale is the electrode potential of hydrogen or non-interacting electrode relative to an SHE electrode, the horizontal scale is the pH of the electrolyte (otherwise non-interacting). Neglecting overpotential, above the top line the equilibrium condition is oxygen gas, and oxygen will bubble off of the electrode until equilibrium is reached. Likewise, below the bottom line, the equilibrium condition is hydrogen gas, and hydrogen will bubble off of the electrode until equilibrium is reached.

The decomposition of pure water into hydrogen and oxygen at standard temperature and pressure is not favorable in thermodynamic terms.

Anode (oxidation): 2 H2O(l) → O2(g) + 4 H+(aq) + 4 e−    E° = +1.23 V (for the reduction half-equation)
Cathode (reduction): 2 H+(aq) + 2 e− → H2(g)    E° = 0.00 V

Thus, the standard potential of the water electrolysis cell (E°cell = E°cathode − E°anode) is −1.229 V at 25 °C at pH 0 ([H+] = 1.0 M). At 25 °C with pH 7 ([H+] = 1.0×10−7 M), the potential is unchanged based on the Nernst equation. The thermodynamic standard cell potential can be obtained from standard-state free energy calculations to find ΔG° and then using the equation ΔG° = −nFE° (where E° is the cell potential and F the Faraday constant, i.e. 96,485.3321233 C/mol). For two water molecules electrolysed, and hence two hydrogen molecules formed, n = 4, and ΔG° = 474.48 kJ/2 mol(water) = 237.24 kJ/mol(water), ΔS° = 163 J/K·mol(water), ΔH° = 571.66 kJ/2 mol(water) = 285.83 kJ/mol(water), and finally 141.86 MJ/kg(H2). However, calculations of individual electrode equilibrium potentials require corrections taking into account the activity coefficients. In practice, when an electrochemical cell is "driven" toward completion by applying a reasonable potential, it is kinetically controlled. Therefore activation energy, ion mobility (diffusion) and concentration, wire resistance, surface hindrance including bubble formation (which blocks electrode area), and entropy all require a greater applied potential to overcome. The amount of increase in potential required is termed the overpotential.
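The relation ΔG° = −nFE° quoted above can be verified in a few lines; a back-of-envelope check using the values from the text (small differences from the quoted 474.48 kJ come from rounding E° to 1.229 V):

```python
# Check ΔG° = -n F E° for water electrolysis (per 2 mol H2O, n = 4).
F = 96485.332   # Faraday constant, C/mol
E_cell = -1.229 # V, standard cell potential for water electrolysis
n = 4           # electrons transferred per 2 mol of water

dG = -n * F * E_cell  # Gibbs free energy change, J per 2 mol water
print(f"ΔG° = {dG/1000:.1f} kJ per 2 mol water")  # ≈ 474 kJ, as in the text
print(f"    = {dG/2000:.1f} kJ per mol water")    # ≈ 237 kJ
```

Dividing the per-mole figure by the molar mass of H2 (about 2.016 g/mol) for ΔH° = 285.83 kJ/mol reproduces the 141.86 MJ/kg(H2) figure quoted above.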

Electrolyte selection

If the above-described processes occur in pure water, H+ cations will be consumed/reduced at the cathode and OH− anions will be consumed/oxidised at the anode. This can be verified by adding a pH indicator to the water: the water near the cathode is basic while the water near the anode is acidic. The negative hydroxide ions that approach the anode mostly combine with the positive hydronium ions (H3O+) to form water. The positive hydronium ions that approach the cathode mostly combine with negative hydroxide ions to form water. Relatively few hydronium/hydroxide ions reach the cathode/anode. This can cause a concentration overpotential at both electrodes.

Pure water is a fairly good insulator, since it has a low autoionization constant, Kw = 1.0×10−14 at room temperature, and thus conducts current poorly (0.055 µS·cm−1). Unless a very large potential is applied to increase the autoionization of water, the electrolysis of pure water proceeds very slowly, limited by the overall conductivity.

If a water-soluble electrolyte is added, the conductivity of the water rises considerably. The electrolyte dissociates into cations and anions; the anions rush towards the anode and neutralize the buildup of positively charged H+ there; similarly, the cations rush towards the cathode and neutralize the buildup of negatively charged OH− there. This allows the continuous flow of electricity.

Electrolyte for water electrolysis

Care must be taken in choosing an electrolyte, since an anion from the electrolyte competes with the hydroxide ions to give up an electron. An electrolyte anion with a lower standard electrode potential than hydroxide will be oxidized instead of the hydroxide, and no oxygen gas will be produced. A cation with a greater standard electrode potential than a hydrogen ion will be reduced instead, and no hydrogen gas will be produced.

The following cations have lower electrode potential than H+ and are therefore suitable for use as electrolyte cations: Li+, Rb+, K+, Cs+, Ba2+, Sr2+, Ca2+, Na+, and Mg2+. Sodium and lithium are frequently used, as they form inexpensive, soluble salts.
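The cation-selection rule can be illustrated with standard reduction potentials. A small sketch; the potentials are standard textbook values versus SHE, and the helper function is hypothetical:

```python
# Cations whose standard reduction potential lies below that of H+ (0.00 V)
# are not reduced in preference to hydrogen, so they suit electrolyte use.
STANDARD_REDUCTION_V = {   # E° in volts vs SHE (textbook values)
    "Li+": -3.04, "K+": -2.93, "Ba2+": -2.91, "Ca2+": -2.87,
    "Na+": -2.71, "Mg2+": -2.37, "H+": 0.00, "Cu2+": 0.34, "Ag+": 0.80,
}

def suitable_cations(potentials):
    """Return cations reduced less readily than H+ (E° below 0 V)."""
    h = potentials["H+"]
    return sorted(c for c, e in potentials.items() if e < h)

print(suitable_cations(STANDARD_REDUCTION_V))
# Cu2+ and Ag+ are excluded: they would plate out as metal instead of
# hydrogen gas being evolved at the cathode.
```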

If an acid is used as the electrolyte, the cation is H+, and there is no competitor for the H+ created by dissociating water. The most commonly used anion is sulfate (SO42−), as it is very difficult to oxidize, the standard potential for oxidation of this ion to the peroxydisulfate ion being +2.010 volts.

Strong acids such as sulfuric acid (H2SO4), and strong bases such as potassium hydroxide (KOH), and sodium hydroxide (NaOH) are frequently used as electrolytes due to their strong conducting abilities.

A solid polymer electrolyte such as Nafion can also be used; when applied with a special catalyst on each side of the membrane, it can efficiently split the water molecule with as little as 1.5 volts. Several other solid-electrolyte systems have been trialled and developed, and several electrolysis systems that use solid electrolytes are now available commercially.

Pure water electrolysis

Electrolyte-free pure water electrolysis has been achieved by using deep-sub-Debye-length nanogap electrochemical cells. When the gap distance between cathode and anode is even smaller than the Debye length (1 micron in pure water, around 220 nm in distilled water), the double-layer regions of the two electrodes can overlap, leading to a uniformly high electric field distributed across the entire gap. Such a high electric field can significantly enhance ion transport inside the water (mainly through migration), further enhancing the self-ionization of water, keeping the whole reaction going, and presenting a small resistance between the two electrodes. In this case, the two half-reactions are coupled together and limited by the electron-transfer steps (the electrolysis current saturates as the electrode distance is reduced further).

Techniques

Fundamental demonstration

Two leads, running from the terminals of a battery, are placed in a cup of water with a quantity of electrolyte to establish conductivity in the solution. Using NaCl (table salt) in an electrolyte solution results in chlorine gas rather than oxygen due to a competing half-reaction. With the correct electrodes and correct electrolyte, such as baking soda (sodium bicarbonate), hydrogen and oxygen gases will stream from the oppositely charged electrodes. Oxygen will collect at the positively charged electrode (anode) and hydrogen will collect at the negatively charged electrode (cathode). Note that hydrogen is positively charged in the H2O molecule, so it ends up at the negative electrode. (And vice versa for oxygen.)

Note that when an aqueous solution containing chloride ions is electrolyzed, either OH− is discharged if the concentration of Cl− is low, or chlorine gas is preferentially discharged if the concentration of Cl− is greater than 25% by mass in the solution.

Match test used to detect the presence of hydrogen gas

Hofmann voltameter

The Hofmann voltameter is often used as a small-scale electrolytic cell. It consists of three joined upright cylinders. The inner cylinder is open at the top to allow the addition of water and the electrolyte. A platinum electrode is placed at the bottom of each of the two side cylinders, connected to the positive and negative terminals of a source of electricity. When current is run through the Hofmann voltameter, gaseous oxygen forms at the anode (positive) and gaseous hydrogen at the cathode (negative). Each gas displaces water and collects at the top of the two outer tubes, where it can be drawn off with a stopcock.

Industrial

Many industrial electrolysis cells are very similar to Hofmann voltameters, with complex platinum plates or honeycombs as electrodes. Generally, hydrogen is intentionally produced by electrolysis only for specific point-of-use applications, such as oxyhydrogen torches, or when extremely high-purity hydrogen or oxygen is desired. The vast majority of hydrogen is produced from hydrocarbons and as a result contains trace amounts of carbon monoxide among other impurities. The carbon monoxide impurity can be detrimental to various systems, including many fuel cells.

High-pressure

High-pressure electrolysis is the electrolysis of water with a compressed hydrogen output of around 12–20 MPa (120–200 bar, 1740–2900 psi). By pressurising the hydrogen in the electrolyser, the need for an external hydrogen compressor is eliminated; the average energy consumption for internal compression is around 3%.

High-temperature

High-temperature electrolysis (also HTE or steam electrolysis) is a method currently being investigated for water electrolysis with a heat engine. High temperature electrolysis may be preferable to traditional room-temperature electrolysis because some of the energy is supplied as heat, which is cheaper than electricity, and because the electrolysis reaction is more efficient at higher temperatures.

Alkaline water

A water ionizer (also known as an alkaline ionizer) is a home appliance which claims to raise the pH of drinking water by using electrolysis to separate the incoming water stream into acidic and alkaline components. The alkaline stream of the treated water is called "alkaline water". Proponents claim that consumption of alkaline water results in a variety of health benefits, making it similar to the alternative health practice of alkaline diets. Such claims violate basic principles of chemistry and physiology: there is no medical evidence for any health benefits of alkaline water, and extensive scientific evidence has thoroughly debunked these claims.

The machines originally became popular in Japan and other East Asian countries before becoming available in the U.S. and Europe.

Polymer electrolyte membrane

A proton-exchange membrane, or polymer-electrolyte membrane (PEM), is a semipermeable membrane generally made from ionomers and designed to conduct protons while acting as an electronic insulator and reactant barrier, e.g. to oxygen and hydrogen gas. This is their essential function when incorporated into a membrane electrode assembly (MEA) of a proton-exchange membrane fuel cell or of a proton-exchange membrane electrolyser: separation of reactants and transport of protons while blocking a direct electronic pathway through the membrane.

PEMs can be made from either pure polymer membranes or from composite membranes, where other materials are embedded in a polymer matrix. One of the most common and commercially available PEM materials is the fluoropolymer (PFSA) Nafion, a DuPont product. While Nafion is an ionomer with a perfluorinated backbone like Teflon, there are many other structural motifs used to make ionomers for proton-exchange membranes. Many use polyaromatic polymers, while others use partially fluorinated polymers.

Proton-exchange membranes are primarily characterized by proton conductivity (σ), methanol permeability (P), and thermal stability.

PEM fuel cells use a solid polymer membrane (a thin plastic film) which is permeable to protons when it is saturated with water, but it does not conduct electrons.

Supercritical water electrolysis

Supercritical water electrolysis (SWE) uses water in a supercritical state, which changes its properties such that less electrical energy is required to split the hydrogen–oxygen bonds of water, improving electrical efficiency and reducing costs. The increased temperature (>375 °C) reduces thermodynamic barriers and increases kinetics, while the improved ionic conductivity over liquid or gaseous water reduces ohmic losses. Benefits include improved electrical efficiency, delivery of product gases pressurised above 221 bar, the ability to operate at high current densities, and low dependence on precious metals for catalysts. No commercial SWE equipment exists yet, although companies are attempting to commercialise the technology.

Nickel/iron

In 2014, researchers announced an electrolysis system made of inexpensive, abundant nickel and iron rather than precious metal catalysts, such as platinum or iridium. The nickel-metal/nickel-oxide structure is more active than pure nickel metal or pure nickel oxide alone. The catalyst significantly lowers the required voltage. Also nickel–iron batteries are being investigated for use as combined batteries and electrolysis for hydrogen production. Those "battolysers" could be charged and discharged like conventional batteries, and would produce hydrogen when fully charged.

Nanogap electrochemical cells

In 2017, researchers reported using nanogap electrochemical cells to achieve high-efficiency electrolyte-free pure-water electrolysis at room temperature. In nanogap electrochemical cells, the two electrodes are so close to each other (closer even than the Debye length in pure water) that the mass transport rate can exceed the electron-transfer rate, so the two half-reactions are coupled together and limited by the electron-transfer step. Experiments show that the electrical current density from pure-water electrolysis can be even larger than that from a 1 mol/L sodium hydroxide solution. The mechanism, termed the "Virtual Breakdown Mechanism", differs completely from well-established traditional electrochemical theory, owing to this nanogap size effect.

Applications

About five percent of hydrogen gas produced worldwide is created by electrolysis. Currently, most industrial methods produce hydrogen from natural gas instead, in the steam reforming process. The majority of the hydrogen produced through electrolysis is a side product of the production of chlorine and caustic soda, a prime example of a competing side reaction.

2NaCl + 2H2O → Cl2 + H2 + 2NaOH

In the chloralkali process (electrolysis of brine, a water/sodium chloride mixture), only half of the electrolysis of water takes place, since the chloride ions are oxidized to chlorine rather than water being oxidized to oxygen. Thermodynamically this would not be expected, since the oxidation potential of the chloride ion is less than that of water, but the rate of the chloride reaction is much greater than that of water, causing it to predominate. The hydrogen produced from this process is either burned (converting it back to water), used for the production of specialty chemicals, or put to various other small-scale applications.

Water electrolysis is also used to generate oxygen for the International Space Station.

Additionally, some car companies have researched using water as a hydrogen source, converting it into hydrogen and oxygen via water electrolysis and using the hydrogen as fuel in a hydrogen vehicle; these efforts have not met much success, due to the unstable characteristics of hydrogen as a fuel source.

Efficiency

Industrial output

(Figure: inputs and outputs of simple water electrolysis for the production of hydrogen.)

The efficiency of modern hydrogen generators is measured by energy consumed per standard volume of hydrogen (MJ/m3), assuming standard temperature and pressure of the H2. The less energy a generator consumes, the more efficient it is; a 100%-efficient electrolyser would consume 39.4 kilowatt-hours per kilogram of hydrogen (142 MJ/kg), or 12,749 joules per litre (12.75 MJ/m3). Practical electrolysis (using a rotating electrolyser at 15 bar pressure) may consume 50 kW⋅h/kg (180 MJ/kg), plus a further 15 kW⋅h (54 MJ) if the hydrogen is compressed for use in hydrogen cars.
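These figures follow from the higher heating value (HHV) and the density of hydrogen. A quick back-of-the-envelope check, assuming an HHV of about 141.8 MJ/kg and a density of 0.0899 kg/m3 at standard conditions (both values assumed here, not quoted in the text):

```python
HHV = 141.8e6    # J/kg, higher heating value of hydrogen (assumed)
RHO_H2 = 0.0899  # kg/m3, density of H2 at standard conditions (assumed)

# Energy per kilogram for a 100%-efficient electrolyser
kwh_per_kg = HHV / 3.6e6                 # ~39.4 kWh/kg
# Energy per litre of gas at standard conditions
joules_per_litre = HHV * RHO_H2 / 1000   # ~12,748 J/L, i.e. ~12.75 MJ/m3

print(round(kwh_per_kg, 1), round(joules_per_litre))
```

The small discrepancy with the quoted 12,749 J/L comes from rounding in the assumed HHV and density.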

Electrolyzer vendors provide efficiencies based on enthalpy. To assess the claimed efficiency of an electrolyzer it is important to establish how it was defined by the vendor (i.e. what enthalpy value, what current density, etc.).

There are two main technologies on the market: alkaline and proton-exchange-membrane (PEM) electrolyzers. Alkaline electrolyzers are cheaper in terms of investment (they generally use nickel catalysts) but less efficient; PEM electrolyzers, conversely, are more expensive (they generally use expensive platinum-group-metal catalysts) but are more efficient and can operate at higher current densities, and can therefore be cheaper overall if hydrogen production is large enough.

Conventional alkaline electrolysis has an efficiency of about 70%. Accounting for the accepted use of the higher heating value (because inefficiency via heat can be redirected back into the system to create the steam required by the catalyst), average working efficiencies for PEM electrolysis are around 80%. This is expected to increase to between 82% and 86% before 2030. The theoretical efficiency of PEM electrolysers is predicted to reach up to 94%.

(Figure: H2 production cost ($/gge, untaxed) at varying natural gas prices.)

Considering the industrial production of hydrogen, and using current best processes for water electrolysis (PEM or alkaline electrolysis), which have an effective electrical efficiency of 70–80%, producing 1 kg of hydrogen (which has a specific energy of 143 MJ/kg) requires 50–55 kW⋅h (180–200 MJ) of electricity. At an electricity cost of $0.06/kW⋅h, as set out in the US Department of Energy hydrogen production targets for 2015, the hydrogen cost is $3/kg. With the range of 2016 natural gas prices shown in the graph (Hydrogen Production Tech Team Roadmap, November 2017) putting the cost of steam-methane-reformed (SMR) hydrogen at between $1.20 and $1.50, the cost of hydrogen via electrolysis is still over double the 2015 DOE hydrogen target prices. The US DOE target price for hydrogen in 2020 is $2.30/kg, which requires an electricity cost of $0.037/kW⋅h, achievable given 2018 power-purchase-agreement (PPA) tenders for wind and solar in many regions. This puts the objective of $4 per gasoline-gallon-equivalent (gge) of dispensed H2 well within reach, and close to the cost of SMR hydrogen at slightly elevated natural gas prices.
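The electricity-cost arithmetic above can be reproduced directly. The helper below is hypothetical and covers only the electricity component of the hydrogen price; the DOE targets also fold in capital and operating costs:

```python
def h2_electricity_cost(kwh_per_kg, usd_per_kwh):
    """Electricity cost component of electrolytic hydrogen, in $/kg."""
    return kwh_per_kg * usd_per_kwh

# 50 kWh/kg at $0.06/kWh gives the $3/kg figure quoted above
print(h2_electricity_cost(50, 0.06))
# At the $0.037/kWh needed for the 2020 target, electricity alone is ~$1.85/kg
print(h2_electricity_cost(50, 0.037))
```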

In other parts of the world, the price of SMR hydrogen is between $1–3/kg on average. This makes production of hydrogen via electrolysis cost competitive in many regions already, as outlined by Nel Hydrogen and others, including an article by the IEA examining the conditions which could lead to a competitive advantage for electrolysis.

Some large industrial electrolyzers are operating at several megawatts. As of 2022, the largest is a 150 MW alkaline facility in Ningxia, China, with a capacity up to 23,000 tonnes per year.
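Plant-scale figures like these can be sanity-checked from rated power, utilisation, and specific energy consumption. The capacity factor and the 50 kWh/kg consumption below are illustrative assumptions, not reported values for the Ningxia facility:

```python
def annual_h2_tonnes(power_mw, capacity_factor, kwh_per_kg):
    """Estimate yearly hydrogen output of an electrolysis plant in tonnes."""
    kwh_per_year = power_mw * 1000 * 8760 * capacity_factor
    return kwh_per_year / kwh_per_kg / 1000  # kg -> tonnes

# A 150 MW plant at ~88% utilisation and 50 kWh/kg yields roughly 23,000 t/yr
print(round(annual_h2_tonnes(150, 0.88, 50)))
```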

Overpotential

Real water electrolyzers require voltages higher than the thermodynamic minimum of 1.23 V for the reaction to proceed. The part that exceeds 1.23 V is called the overpotential or overvoltage, and represents any kind of loss and nonideality in the electrochemical process.
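One way to picture this is to write the operating voltage as the reversible potential plus individual loss terms. The breakdown and the numbers below are purely illustrative assumptions; real values depend on materials, temperature, and current density:

```python
def cell_voltage(e_rev=1.23, eta_activation=0.35, eta_ohmic=0.10, eta_concentration=0.05):
    """Operating cell voltage (V): reversible potential plus overpotential terms.

    The loss terms here are illustrative placeholders, not measured values.
    """
    return e_rev + eta_activation + eta_ohmic + eta_concentration

print(round(cell_voltage(), 2))  # 1.73 V with these illustrative numbers
```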

For a well-designed cell, the largest overpotential is the reaction overpotential for the four-electron oxidation of water to oxygen at the anode; electrocatalysts can facilitate this reaction, and platinum alloys are the state of the art for this oxidation. Developing a cheap, effective electrocatalyst for this reaction would be a great advance and is a topic of current research; the many approaches include a 30-year-old recipe for molybdenum sulfide, graphene quantum dots, carbon nanotubes, perovskites, and nickel/nickel oxide. Tri-molybdenum phosphide (Mo3P) has recently been identified as a promising, earth-abundant, non-precious-metal candidate with outstanding catalytic properties. Tested in the hydrogen evolution reaction (HER), Mo3P nanoparticles show an onset potential as low as 21 mV, an H2 formation rate of 214.7 µmol s−1 g−1cat (at only 100 mV overpotential), and an exchange current density of 279.07 µA cm−2, which are among the closest values to platinum yet observed. The simpler two-electron reaction to produce hydrogen at the cathode can be electrocatalyzed with almost no overpotential by platinum, or in theory by a hydrogenase enzyme. If other, less effective materials are used for the cathode (e.g. graphite), large overpotentials appear.

Thermodynamics

The electrolysis of water in standard conditions requires a theoretical minimum of 237 kJ of electrical energy input to dissociate each mole of water, which equals the magnitude of the standard Gibbs free energy of formation of water. It also requires energy to overcome the change in entropy of the reaction; the total, 286 kJ per mole, is the reaction enthalpy. Therefore, the process cannot proceed below 286 kJ per mole if no external heat or energy is added.

Since each mole of water requires two moles of electrons, and given that the Faraday constant F represents the charge of a mole of electrons (96485 C/mol), it follows that the minimum voltage necessary for electrolysis is about 1.23 V. If electrolysis is carried out at high temperature, this voltage reduces. This effectively allows the electrolyser to operate at more than 100% electrical efficiency. In electrochemical systems this means that heat must be supplied to the reactor to sustain the reaction. In this way thermal energy can be used for part of the electrolysis energy requirement. In a similar way the required voltage can be reduced (below 1 V) if fuels (such as carbon, alcohol, biomass) are reacted with water (PEM based electrolyzer in low temperature) or oxygen ions (solid oxide electrolyte based electrolyzer in high temperature). This results in some of the fuel's energy being used to "assist" the electrolysis process and can reduce the overall cost of hydrogen produced.
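Both the 1.23 V minimum and the 1.48 V thermoneutral voltage discussed in this section follow from dividing the relevant reaction energy by the charge transferred per mole of water (standard tabulated values are used below):

```python
F = 96485.0   # C/mol, Faraday constant
N = 2         # moles of electrons transferred per mole of water
DG = 237.1e3  # J/mol, standard Gibbs free energy of the reaction
DH = 285.8e3  # J/mol, standard reaction enthalpy

e_rev = DG / (N * F)  # reversible (minimum) voltage, ~1.23 V
e_tn = DH / (N * F)   # thermoneutral voltage, ~1.48 V
print(round(e_rev, 2), round(e_tn, 2))
```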

However, accounting for the entropy component (and other losses), voltages over 1.48 V (the thermoneutral voltage) are required for the reaction to proceed at practical current densities.

In the case of water electrolysis, the Gibbs free energy represents the minimum work necessary for the reaction to proceed, while the reaction enthalpy is the amount of energy (both work and heat) that has to be provided so that the reaction products end up at the same temperature as the reactants (i.e. standard temperature for the values given above). Potentially, an electrolyzer operating at exactly 1.48 V would be 100% efficient on an enthalpy basis.
